Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
249,907 | 7,965,270,671 | IssuesEvent | 2018-07-14 05:47:56 | esteemapp/esteem-surfer | https://api.github.com/repos/esteemapp/esteem-surfer | closed | Sort comments with number of Votes by default | high priority | Let's try default sorting to be by number of votes... encourages more comment voters

| 1.0 | Sort comments with number of Votes by default - Let's try default sorting to be by number of votes... encourages more comment voters

| priority | sort comments with number of votes by default let s try default sorting to be by number of votes encourages more comment voters | 1 |
79,589 | 3,537,431,090 | IssuesEvent | 2016-01-18 00:44:35 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | SF-Trac error | high priority next | Not really Canto related but I just tried to access SF-Trac and got this error:
Error
TracError: The Trac Environment needs to be upgraded.
Run "trac-admin /data/pombase/trac-sf-copy upgrade" | 1.0 | SF-Trac error - Not really Canto related but I just tried to access SF-Trac and got this error:
Error
TracError: The Trac Environment needs to be upgraded.
Run "trac-admin /data/pombase/trac-sf-copy upgrade" | priority | sf trac error not really canto related but i just tried to access sf trac and got this error error tracerror the trac environment needs to be upgraded run trac admin data pombase trac sf copy upgrade | 1 |
492,376 | 14,201,191,467 | IssuesEvent | 2020-11-16 07:14:07 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.xvideos.com - desktop site instead of mobile site | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61890 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.xvideos.com/videos-i-like#0
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android
**Tested Another Browser**: Yes Safari
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201108174701</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/18cc2c3d-ad1b-4edc-967d-51672fd97281)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.xvideos.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61890 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.xvideos.com/videos-i-like#0
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android
**Tested Another Browser**: Yes Safari
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201108174701</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/18cc2c3d-ad1b-4edc-967d-51672fd97281)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser yes safari problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce view the screenshot browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
127,996 | 5,042,256,235 | IssuesEvent | 2016-12-19 13:23:42 | robotology/icub-tests | https://api.github.com/repos/robotology/icub-tests | opened | Missing installation doc for macOS and Windows | Platform: macOS Platform: Windows Priority: High Status: In Progress Type: Enhancement | Missing installation documentation for LD_LIBRARY_PATH equivalent for macOS and Windows in
https://robotology.github.io/icub-tests/doxygen/doc/html/installation.html | 1.0 | Missing installation doc for macOS and Windows - Missing installation documentation for LD_LIBRARY_PATH equivalent for macOS and Windows in
https://robotology.github.io/icub-tests/doxygen/doc/html/installation.html | priority | missing installation doc for macos and windows missing installation documentation for ld library path equivalent for macos and windows in | 1 |
614,564 | 19,185,608,984 | IssuesEvent | 2021-12-05 05:55:30 | SLCommunity/511 | https://api.github.com/repos/SLCommunity/511 | closed | USER.ROLE 판별 | Feature/Function Status:Done Priority:High | ## 목적
USER, Admin 판별 후 라우팅
## 작업 상세 내용
- [x] frontend 접근 막기
- [x] backend API접근 막기
## 참고사항
ajaxPrefilter와 ajax beforeSend의 차이점에 대해 조금 더 알아볼 예정입니다. | 1.0 | USER.ROLE 판별 - ## 목적
USER, Admin 판별 후 라우팅
## 작업 상세 내용
- [x] frontend 접근 막기
- [x] backend API접근 막기
## 참고사항
ajaxPrefilter와 ajax beforeSend의 차이점에 대해 조금 더 알아볼 예정입니다. | priority | user role 판별 목적 user admin 판별 후 라우팅 작업 상세 내용 frontend 접근 막기 backend api접근 막기 참고사항 ajaxprefilter와 ajax beforesend의 차이점에 대해 조금 더 알아볼 예정입니다 | 1 |
172,626 | 6,514,741,030 | IssuesEvent | 2017-08-26 05:08:53 | ncssar/radiolog | https://api.github.com/repos/ncssar/radiolog | opened | get rid of 'printing' message dialog | bug Priority:High | This window serves no purpose, and causes confusion - might have actually led the operator to hit some button that caused a crash? Anyway - just get rid of it - which would address #33 and #263 | 1.0 | get rid of 'printing' message dialog - This window serves no purpose, and causes confusion - might have actually led the operator to hit some button that caused a crash? Anyway - just get rid of it - which would address #33 and #263 | priority | get rid of printing message dialog this window serves no purpose and causes confusion might have actually led the operator to hit some button that caused a crash anyway just get rid of it which would address and | 1 |
323,873 | 9,879,770,396 | IssuesEvent | 2019-06-24 10:52:02 | telstra/open-kilda | https://api.github.com/repos/telstra/open-kilda | closed | add `diverse-with` to get /flow | feature priority/2-high | When a flows set or a single flow requested, we should return all flow IDs this flow diverse with.
Parameter name suggestion: `diverse-with` | 1.0 | add `diverse-with` to get /flow - When a flows set or a single flow requested, we should return all flow IDs this flow diverse with.
Parameter name suggestion: `diverse-with` | priority | add diverse with to get flow when a flows set or a single flow requested we should return all flow ids this flow diverse with parameter name suggestion diverse with | 1 |
282,587 | 8,708,221,793 | IssuesEvent | 2018-12-06 10:16:52 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | GraphQL get objects by relation ID | priority: high status: confirmed type: bug 🐛 | <!-- ⚠️ If you do not respect this template your issue will be closed. -->
<!-- =============================================================================== -->
<!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. -->
<!-- Please see the wiki for guides on upgrading to the latest release. -->
<!-- =============================================================================== -->
<!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
<!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 10.x.x -->
<!-- npm 6.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**: 10.8.0
- **npm version**: 6.2.0
- **Strapi version**: alpha 14.4
- **Database**: pg
- **Operating system**: windows/wsl
**What is the current behavior?**
When I define the `where` clause of a related object for the parent ID, I get wrong information for nested properties.
**Steps to reproduce the problem**
So my model `event` consists of multiple `registrations`.
I can ask either for
1: `event(id: 6) {registrations{}}`
or for
2: `registrations(where: {event: 6}) {}`
while `event(id)` delivers the correct data, `registrations(where)` does not.
**Query 1:**
```
event(id: 6) {
registrations{
user {
id
username
}
}
}
```
**Response 1:**
```
"event": {
"registrations": [
{
"user": {
"id": "87",
"username": "kio"
}
},
{
"user": {
"id": "26",
"username": "chitatz"
}
},
{
"user": {
"id": "10",
"username": "akiru"
}
},
{
"user": {
"id": "56",
"username": "famis"
}
}
]
}
```
**Query 2:**
```
registrations(where: {event: 6}) {
id
user {
id
username
}
}
```
**Response 2:**
```
{
"registrations": [
{
"id": "17",
"user": {
"id": "39",
"username": "dawn"
}
},
{
"id": "18",
"user": {
"id": "39",
"username": "dawn"
}
},
{
"id": "19",
"user": {
"id": "39",
"username": "dawn"
}
},
{
"id": "20",
"user": {
"id": "39",
"username": "dawn"
}
}
]
}
```
**What is the expected behavior?**
relation where selectors should behave as object(id) selector
**Suggested solutions**
I believe this might be sql specific, but I can't be sure.
Make sure the related objects are properly resolved, I don't know why a user is returned that is not even in the list of registrations. | 1.0 | GraphQL get objects by relation ID - <!-- ⚠️ If you do not respect this template your issue will be closed. -->
<!-- =============================================================================== -->
<!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. -->
<!-- Please see the wiki for guides on upgrading to the latest release. -->
<!-- =============================================================================== -->
<!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
<!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 10.x.x -->
<!-- npm 6.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**: 10.8.0
- **npm version**: 6.2.0
- **Strapi version**: alpha 14.4
- **Database**: pg
- **Operating system**: windows/wsl
**What is the current behavior?**
When I define the `where` clause of a related object for the parent ID, I get wrong information for nested properties.
**Steps to reproduce the problem**
So my model `event` consists of multiple `registrations`.
I can ask either for
1: `event(id: 6) {registrations{}}`
or for
2: `registrations(where: {event: 6}) {}`
while `event(id)` delivers the correct data, `registrations(where)` does not.
**Query 1:**
```
event(id: 6) {
registrations{
user {
id
username
}
}
}
```
**Response 1:**
```
"event": {
"registrations": [
{
"user": {
"id": "87",
"username": "kio"
}
},
{
"user": {
"id": "26",
"username": "chitatz"
}
},
{
"user": {
"id": "10",
"username": "akiru"
}
},
{
"user": {
"id": "56",
"username": "famis"
}
}
]
}
```
**Query 2:**
```
registrations(where: {event: 6}) {
id
user {
id
username
}
}
```
**Response 2:**
```
{
"registrations": [
{
"id": "17",
"user": {
"id": "39",
"username": "dawn"
}
},
{
"id": "18",
"user": {
"id": "39",
"username": "dawn"
}
},
{
"id": "19",
"user": {
"id": "39",
"username": "dawn"
}
},
{
"id": "20",
"user": {
"id": "39",
"username": "dawn"
}
}
]
}
```
**What is the expected behavior?**
relation where selectors should behave as object(id) selector
**Suggested solutions**
I believe this might be sql specific, but I can't be sure.
Make sure the related objects are properly resolved, I don't know why a user is returned that is not even in the list of registrations. | priority | graphql get objects by relation id informations node js version npm version strapi version alpha database pg operating system windows wsl what is the current behavior when i define the where clause of a related object for the parent id i get wrong information for nested properties steps to reproduce the problem so my model event consists of multiple registrations i can ask either for event id registrations or for registrations where event while event id delivers the correct data registrations where does not query event id registrations user id username response event registrations user id username kio user id username chitatz user id username akiru user id username famis query registrations where event id user id username response registrations id user id username dawn id user id username dawn id user id username dawn id user id username dawn what is the expected behavior relation where selectors should behave as object id selector suggested solutions i believe this might be sql specific but i can t be sure make sure the related objects are properly resolved i don t know why a user is returned that is not even in the list of registrations | 1 |
704,686 | 24,206,105,866 | IssuesEvent | 2022-09-25 08:41:47 | AY2223S1-CS2103T-T13-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T13-4/tp | closed | As a user, I can add my contact's phone number | type.Story priority.High | so that I do not have to remember their phone number | 1.0 | As a user, I can add my contact's phone number - so that I do not have to remember their phone number | priority | as a user i can add my contact s phone number so that i do not have to remember their phone number | 1 |
224,587 | 7,471,934,230 | IssuesEvent | 2018-04-03 10:54:39 | ballerina-lang/composer | https://api.github.com/repos/ballerina-lang/composer | closed | [Intermittent] 404 when loading the composer | Imported Priority/High Severity/Major Type/Bug cloud component/Composer | Release 0.93
404 is observed when trying to open the composer

| 1.0 | [Intermittent] 404 when loading the composer - Release 0.93
404 is observed when trying to open the composer

| priority | when loading the composer release is observed when trying to open the composer | 1 |
550,833 | 16,133,180,566 | IssuesEvent | 2021-04-29 08:25:41 | 389ds/389-ds-base | https://api.github.com/repos/389ds/389-ds-base | closed | 389ds coredump on the 1st server while installing an IPA replica | priority_high | The nightly test `test_integration/test_fips.py::TestInstallFIPS` failed in PR #[809](https://github.com/freeipa-pr-ci2/freeipa/pull/809) while installing a replica with CA.
This is simlar to ipa #[8765](https://pagure.io/freeipa/issue/8765) / 389-ds #[4670](https://github.com/389ds/389-ds-base/issues/4670) but the coredump happens on the master at a different place.
```
Mar 27 21:13:08 master.ipa.test systemd-coredump[35807]: Process 34105 (ns-slapd) of user 389 dumped core.
Stack trace of thread 35774:
#0 0x00007f1ef2e250d7 dblayer_bulk_start (libback-ldbm.so + 0x260d7)
#1 0x00007f1ef2d46e06 clcache_load_buffer_bulk (libreplication-plugin.so + 0x2ce06)
#2 0x00007f1ef2d48228 clcache_load_buffer (libreplication-plugin.so + 0x2e228)
#3 0x00007f1ef2d4be76 _cl5PositionCursorForReplay (libreplication-plugin.so + 0x31e76)
#4 0x00007f1ef2d4c5e3 cl5CreateReplayIterator (libreplication-plugin.so + 0x325e3)
#5 0x00007f1ef2d5f787 repl5_inc_run (libreplication-plugin.so + 0x45787)
#6 0x00007f1ef2d66231 prot_thread_main (libreplication-plugin.so + 0x4c231)
#7 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#8 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#9 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34105:
#0 0x00007f1ef7cd9a5f __poll (libc.so.6 + 0xf6a5f)
#1 0x00007f1ef79b9d46 _pr_poll_with_poll (libnspr4.so + 0x2bd46)
#2 0x0000562234913ab8 slapd_daemon (ns-slapd + 0x87ab8)
#3 0x000056223490761a main (ns-slapd + 0x7b61a)
#4 0x00007f1ef7c0b1e2 __libc_start_main (libc.so.6 + 0x281e2)
#5 0x00005622349088ae _start (ns-slapd + 0x7c8ae)
Stack trace of thread 34106:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e6b5df deadlock_threadmain (libback-ldbm.so + 0x6c5df)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34107:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e70def checkpoint_threadmain (libback-ldbm.so + 0x71def)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34108:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e6b427 trickle_threadmain (libback-ldbm.so + 0x6c427)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34109:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e6b064 perf_threadmain (libback-ldbm.so + 0x6c064)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34111:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef2c84a51 roles_cache_wait_on_change (libroles-plugin.so + 0x6a51)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34112:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef2c84a51 roles_cache_wait_on_change (libroles-plugin.so + 0x6a51)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34113:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef2c84a51 roles_cache_wait_on_change (libroles-plugin.so + 0x6a51)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34114:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x000056223491682a housecleaning (ns-slapd + 0x8a82a)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34115:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef79afbc8 PR_WaitCondVar (libnspr4.so + 0x21bc8)
#2 0x00007f1ef7ebceab eq_loop (libslapd.so.0 + 0xf7eab)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34116:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef7ebccae eq_loop_rel (libslapd.so.0 + 0xf7cae)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34118:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34119:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34120:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34121:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34122:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34123:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34124:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34125:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34126:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34127:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34128:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef33afce3 __db_hybrid_mutex_suspend (libdb-5.3.so + 0x36ce3)
#2 0x00007f1ef33b00b1 __db_tas_mutex_lock_int (libdb-5.3.so + 0x370b1)
#3 0x00007f1ef345cbfe __lock_get_internal (libdb-5.3.so + 0xe3bfe)
#4 0x00007f1ef345e390 __lock_get (libdb-5.3.so + 0xe5390)
#5 0x00007f1ef34873a5 __db_lget (libdb-5.3.so + 0x10e3a5)
#6 0x00007f1ef33cc58a __bam_search (libdb-5.3.so + 0x5358a)
#7 0x00007f1ef33ba72b __bamc_search (libdb-5.3.so + 0x4172b)
#8 0x00007f1ef33bb1a8 __bamc_put (libdb-5.3.so + 0x421a8)
#9 0x00007f1ef34714fa __dbc_iput (libdb-5.3.so + 0xf84fa)
#10 0x00007f1ef347615a __db_put (libdb-5.3.so + 0xfd15a)
#11 0x00007f1ef3484fbe __db_put_pp (libdb-5.3.so + 0x10bfbe)
#12 0x00007f1ef2e73db6 bdb_public_db_op (libback-ldbm.so + 0x74db6)
#13 0x00007f1ef2d4a287 _cl5WriteOperationTxn (libreplication-plugin.so + 0x30287)
#14 0x00007f1ef2d4b57f cl5WriteOperationTxn (libreplication-plugin.so + 0x3157f)
#15 0x00007f1ef2d63009 write_changelog_and_ruv (libreplication-plugin.so + 0x49009)
#16 0x00007f1ef2d63481 multimaster_mmr_postop (libreplication-plugin.so + 0x49481)
#17 0x00007f1ef7ef618d plugin_call_mmr_plugin_postop (libslapd.so.0 + 0x13118d)
#18 0x00007f1ef2e533d7 ldbm_back_modify (libback-ldbm.so + 0x543d7)
#19 0x00007f1ef7ee588a op_shared_modify (libslapd.so.0 + 0x12088a)
#20 0x00007f1ef7ee64fd do_modify (libslapd.so.0 + 0x1214fd)
#21 0x0000562234910f8f connection_threadmain (ns-slapd + 0x84f8f)
#22 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#23 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#24 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34129:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34130:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34131:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34133:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34134:
#0 0x00007f1ef7cd9a5f __poll (libc.so.6 + 0xf6a5f)
#1 0x00007f1ef79b9d46 _pr_poll_with_poll (libnspr4.so + 0x2bd46)
#2 0x0000562234912ede accept_thread (ns-slapd + 0x86ede)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34378:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223491ba4c ps_send_results (ns-slapd + 0x8fa4c)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34382:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223491ba4c ps_send_results (ns-slapd + 0x8fa4c)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34385:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223491ba4c ps_send_results (ns-slapd + 0x8fa4c)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35202:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef6a57783 sync_send_results (libcontentsync-plugin.so + 0x7783)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35291:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef6a57783 sync_send_results (libcontentsync-plugin.so + 0x7783)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35732:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef2d49650 _cl5TrimMain (libreplication-plugin.so + 0x2f650)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35733:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef2d5b84e protocol_sleep (libreplication-plugin.so + 0x4184e)
#2 0x00007f1ef2d60dfb repl5_inc_run (libreplication-plugin.so + 0x46dfb)
#3 0x00007f1ef2d66231 prot_thread_main (libreplication-plugin.so + 0x4c231)
#4 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#5 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#6 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35773:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef2d49650 _cl5TrimMain (libreplication-plugin.so + 0x2f650)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34110:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef6a69179 cos_cache_wait_on_change (libcos-plugin.so + 0x9179)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34132:
#0 0x00007f1ef7cdc3fb fsync (libc.so.6 + 0xf93fb)
#1 0x00007f1ef79b6988 pt_Fsync (libnspr4.so + 0x28988)
#2 0x00007f1ef7f39867 log_flush_buffer.constprop.0 (libslapd.so.0 + 0x174867)
#3 0x00007f1ef7eda37e vslapd_log_access (libslapd.so.0 + 0x11537e)
#4 0x00007f1ef7eda521 slapi_log_access (libslapd.so.0 + 0x115521)
#5 0x00007f1ef7f0a871 log_result (libslapd.so.0 + 0x145871)
#6 0x00007f1ef7f0b1ed send_ldap_result_ext (libslapd.so.0 + 0x1461ed)
#7 0x00007f1ef7f0b5ff send_ldap_result (libslapd.so.0 + 0x1465ff)
#8 0x00007f1ef7ef166d op_shared_search (libslapd.so.0 + 0x12c66d)
#9 0x0000562234920bf2 do_search (ns-slapd + 0x94bf2)
#10 0x0000562234911420 connection_threadmain (ns-slapd + 0x85420)
#11 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#12 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#13 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Mar 27 21:13:09 master.ipa.test systemd[1]: systemd-coredump@0-35806-0.service: Succeeded.
```
The installed version is 389-ds-base-2.0.3-20210327git741e7a72a.fc33.x86_64 taken from the nightly copr @389ds/389-ds-base-nightly.
Companion issue on IPA side: https://pagure.io/freeipa/issue/8778 | 1.0 | 389ds coredump on the 1st server while installing an IPA replica - The nightly test `test_integration/test_fips.py::TestInstallFIPS` failed in PR #[809](https://github.com/freeipa-pr-ci2/freeipa/pull/809) while installing a replica with CA.
This is simlar to ipa #[8765](https://pagure.io/freeipa/issue/8765) / 389-ds #[4670](https://github.com/389ds/389-ds-base/issues/4670) but the coredump happens on the master at a different place.
```
Mar 27 21:13:08 master.ipa.test systemd-coredump[35807]: Process 34105 (ns-slapd) of user 389 dumped core.
Stack trace of thread 35774:
#0 0x00007f1ef2e250d7 dblayer_bulk_start (libback-ldbm.so + 0x260d7)
#1 0x00007f1ef2d46e06 clcache_load_buffer_bulk (libreplication-plugin.so + 0x2ce06)
#2 0x00007f1ef2d48228 clcache_load_buffer (libreplication-plugin.so + 0x2e228)
#3 0x00007f1ef2d4be76 _cl5PositionCursorForReplay (libreplication-plugin.so + 0x31e76)
#4 0x00007f1ef2d4c5e3 cl5CreateReplayIterator (libreplication-plugin.so + 0x325e3)
#5 0x00007f1ef2d5f787 repl5_inc_run (libreplication-plugin.so + 0x45787)
#6 0x00007f1ef2d66231 prot_thread_main (libreplication-plugin.so + 0x4c231)
#7 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#8 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#9 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34105:
#0 0x00007f1ef7cd9a5f __poll (libc.so.6 + 0xf6a5f)
#1 0x00007f1ef79b9d46 _pr_poll_with_poll (libnspr4.so + 0x2bd46)
#2 0x0000562234913ab8 slapd_daemon (ns-slapd + 0x87ab8)
#3 0x000056223490761a main (ns-slapd + 0x7b61a)
#4 0x00007f1ef7c0b1e2 __libc_start_main (libc.so.6 + 0x281e2)
#5 0x00005622349088ae _start (ns-slapd + 0x7c8ae)
Stack trace of thread 34106:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e6b5df deadlock_threadmain (libback-ldbm.so + 0x6c5df)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34107:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e70def checkpoint_threadmain (libback-ldbm.so + 0x71def)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34108:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e6b427 trickle_threadmain (libback-ldbm.so + 0x6c427)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34109:
#0 0x00007f1ef7cdc1eb __select (libc.so.6 + 0xf91eb)
#1 0x00007f1ef7f27824 DS_Sleep (libslapd.so.0 + 0x162824)
#2 0x00007f1ef2e6b064 perf_threadmain (libback-ldbm.so + 0x6c064)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34111:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef2c84a51 roles_cache_wait_on_change (libroles-plugin.so + 0x6a51)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34112:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef2c84a51 roles_cache_wait_on_change (libroles-plugin.so + 0x6a51)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34113:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef2c84a51 roles_cache_wait_on_change (libroles-plugin.so + 0x6a51)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34114:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x000056223491682a housecleaning (ns-slapd + 0x8a82a)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34115:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef79afbc8 PR_WaitCondVar (libnspr4.so + 0x21bc8)
#2 0x00007f1ef7ebceab eq_loop (libslapd.so.0 + 0xf7eab)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34116:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef7ebccae eq_loop_rel (libslapd.so.0 + 0xf7cae)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34118:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34119:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34120:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34121:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34122:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34123:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34124:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34125:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34126:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34127:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34128:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef33afce3 __db_hybrid_mutex_suspend (libdb-5.3.so + 0x36ce3)
#2 0x00007f1ef33b00b1 __db_tas_mutex_lock_int (libdb-5.3.so + 0x370b1)
#3 0x00007f1ef345cbfe __lock_get_internal (libdb-5.3.so + 0xe3bfe)
#4 0x00007f1ef345e390 __lock_get (libdb-5.3.so + 0xe5390)
#5 0x00007f1ef34873a5 __db_lget (libdb-5.3.so + 0x10e3a5)
#6 0x00007f1ef33cc58a __bam_search (libdb-5.3.so + 0x5358a)
#7 0x00007f1ef33ba72b __bamc_search (libdb-5.3.so + 0x4172b)
#8 0x00007f1ef33bb1a8 __bamc_put (libdb-5.3.so + 0x421a8)
#9 0x00007f1ef34714fa __dbc_iput (libdb-5.3.so + 0xf84fa)
#10 0x00007f1ef347615a __db_put (libdb-5.3.so + 0xfd15a)
#11 0x00007f1ef3484fbe __db_put_pp (libdb-5.3.so + 0x10bfbe)
#12 0x00007f1ef2e73db6 bdb_public_db_op (libback-ldbm.so + 0x74db6)
#13 0x00007f1ef2d4a287 _cl5WriteOperationTxn (libreplication-plugin.so + 0x30287)
#14 0x00007f1ef2d4b57f cl5WriteOperationTxn (libreplication-plugin.so + 0x3157f)
#15 0x00007f1ef2d63009 write_changelog_and_ruv (libreplication-plugin.so + 0x49009)
#16 0x00007f1ef2d63481 multimaster_mmr_postop (libreplication-plugin.so + 0x49481)
#17 0x00007f1ef7ef618d plugin_call_mmr_plugin_postop (libslapd.so.0 + 0x13118d)
#18 0x00007f1ef2e533d7 ldbm_back_modify (libback-ldbm.so + 0x543d7)
#19 0x00007f1ef7ee588a op_shared_modify (libslapd.so.0 + 0x12088a)
#20 0x00007f1ef7ee64fd do_modify (libslapd.so.0 + 0x1214fd)
#21 0x0000562234910f8f connection_threadmain (ns-slapd + 0x84f8f)
#22 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#23 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#24 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34129:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34130:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34131:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34133:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223490f417 connection_threadmain (ns-slapd + 0x83417)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34134:
#0 0x00007f1ef7cd9a5f __poll (libc.so.6 + 0xf6a5f)
#1 0x00007f1ef79b9d46 _pr_poll_with_poll (libnspr4.so + 0x2bd46)
#2 0x0000562234912ede accept_thread (ns-slapd + 0x86ede)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34378:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223491ba4c ps_send_results (ns-slapd + 0x8fa4c)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34382:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223491ba4c ps_send_results (ns-slapd + 0x8fa4c)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34385:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x000056223491ba4c ps_send_results (ns-slapd + 0x8fa4c)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35202:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef6a57783 sync_send_results (libcontentsync-plugin.so + 0x7783)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35291:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef6a57783 sync_send_results (libcontentsync-plugin.so + 0x7783)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35732:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef2d49650 _cl5TrimMain (libreplication-plugin.so + 0x2f650)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35733:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef2d5b84e protocol_sleep (libreplication-plugin.so + 0x4184e)
#2 0x00007f1ef2d60dfb repl5_inc_run (libreplication-plugin.so + 0x46dfb)
#3 0x00007f1ef2d66231 prot_thread_main (libreplication-plugin.so + 0x4c231)
#4 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#5 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#6 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 35773:
#0 0x00007f1ef79539e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf9e8)
#1 0x00007f1ef2d49650 _cl5TrimMain (libreplication-plugin.so + 0x2f650)
#2 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#3 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#4 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34110:
#0 0x00007f1ef79536c2 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xf6c2)
#1 0x00007f1ef7f12f1d slapi_wait_condvar_pt (libslapd.so.0 + 0x14df1d)
#2 0x00007f1ef6a69179 cos_cache_wait_on_change (libcos-plugin.so + 0x9179)
#3 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#4 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#5 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Stack trace of thread 34132:
#0 0x00007f1ef7cdc3fb fsync (libc.so.6 + 0xf93fb)
#1 0x00007f1ef79b6988 pt_Fsync (libnspr4.so + 0x28988)
#2 0x00007f1ef7f39867 log_flush_buffer.constprop.0 (libslapd.so.0 + 0x174867)
#3 0x00007f1ef7eda37e vslapd_log_access (libslapd.so.0 + 0x11537e)
#4 0x00007f1ef7eda521 slapi_log_access (libslapd.so.0 + 0x115521)
#5 0x00007f1ef7f0a871 log_result (libslapd.so.0 + 0x145871)
#6 0x00007f1ef7f0b1ed send_ldap_result_ext (libslapd.so.0 + 0x1461ed)
#7 0x00007f1ef7f0b5ff send_ldap_result (libslapd.so.0 + 0x1465ff)
#8 0x00007f1ef7ef166d op_shared_search (libslapd.so.0 + 0x12c66d)
#9 0x0000562234920bf2 do_search (ns-slapd + 0x94bf2)
#10 0x0000562234911420 connection_threadmain (ns-slapd + 0x85420)
#11 0x00007f1ef79b9150 _pt_root (libnspr4.so + 0x2b150)
#12 0x00007f1ef794d3f9 start_thread (libpthread.so.0 + 0x93f9)
#13 0x00007f1ef7ce4b53 __clone (libc.so.6 + 0x101b53)
Mar 27 21:13:09 master.ipa.test systemd[1]: systemd-coredump@0-35806-0.service: Succeeded.
```
The installed version is 389-ds-base-2.0.3-20210327git741e7a72a.fc33.x86_64 taken from the nightly copr @389ds/389-ds-base-nightly.
Companion issue on IPA side: https://pagure.io/freeipa/issue/8778 | priority | coredump on the server while installing an ipa replica the nightly test test integration test fips py testinstallfips failed in pr while installing a replica with ca this is simlar to ipa ds but the coredump happens on the master at a different place mar master ipa test systemd coredump process ns slapd of user dumped core stack trace of thread dblayer bulk start libback ldbm so clcache load buffer bulk libreplication plugin so clcache load buffer libreplication plugin so libreplication plugin so libreplication plugin so inc run libreplication plugin so prot thread main libreplication plugin so pt root so start thread libpthread so clone libc so stack trace of thread poll libc so pr poll with poll so slapd daemon ns slapd main ns slapd libc start main libc so start ns slapd stack trace of thread select libc so ds sleep libslapd so deadlock threadmain libback ldbm so pt root so start thread libpthread so clone libc so stack trace of thread select libc so ds sleep libslapd so checkpoint threadmain libback ldbm so pt root so start thread libpthread so clone libc so stack trace of thread select libc so ds sleep libslapd so trickle threadmain libback ldbm so pt root so start thread libpthread so clone libc so stack trace of thread select libc so ds sleep libslapd so perf threadmain libback ldbm so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so slapi wait condvar pt libslapd so roles cache wait on change libroles plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so slapi wait condvar pt libslapd so roles cache wait on change libroles plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so slapi wait condvar pt libslapd so roles cache wait on change libroles plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so housecleaning ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so pr waitcondvar so eq loop libslapd so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so eq loop rel libslapd so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so db hybrid mutex suspend libdb so db tas mutex lock int libdb so lock get internal libdb so lock get libdb so db lget libdb so bam search libdb so bamc search libdb so bamc put libdb so dbc iput libdb so db put libdb so db put pp libdb so bdb public db op libback ldbm so libreplication plugin so libreplication plugin so write changelog and ruv libreplication plugin so multimaster mmr postop libreplication plugin so plugin call mmr plugin postop libslapd so ldbm back modify libback ldbm so op shared modify libslapd so do modify libslapd so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so connection threadmain ns slapd pt root so start thread libpthread so clone libc so stack trace of thread poll libc so pr poll with poll so accept thread ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so ps send results ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so ps send results ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so ps send results ns slapd pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so sync send results libcontentsync plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so sync send results libcontentsync plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so libreplication plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so protocol sleep libreplication plugin so inc run libreplication plugin so prot thread main libreplication plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond timedwait glibc libpthread so libreplication plugin so pt root so start thread libpthread so clone libc so stack trace of thread pthread cond wait glibc libpthread so slapi wait condvar pt libslapd so cos cache wait on change libcos plugin so pt root so start thread libpthread so clone libc so stack trace of thread fsync libc so pt fsync so log flush buffer constprop libslapd so vslapd log access libslapd so slapi log access libslapd so log result libslapd so send ldap result ext libslapd so send ldap result libslapd so op shared search libslapd so do search ns slapd connection threadmain ns slapd pt root so start thread libpthread so clone libc so mar master ipa test systemd systemd coredump service succeeded the installed version is ds base taken from the nightly copr ds base nightly companion issue on ipa side | 1 |
118,958 | 4,758,403,224 | IssuesEvent | 2016-10-24 19:23:10 | NashTeamAlpha/BangazonWeb | https://api.github.com/repos/NashTeamAlpha/BangazonWeb | closed | Add Items from BangazonAPI Startup file | First Wave High priority | ## Feature Name
1. Add MVC service to Startup.cs file
1. Add CORS service to Startup.cs file
1. Add string path to Startup.cs file
Basically, copy and paste lines 31 through 51 (approx) from the BangazonAPI Startup.cs file to the current project's Startup.cs
| 1.0 | Add Items from BangazonAPI Startup file - ## Feature Name
1. Add MVC service to Startup.cs file
1. Add CORS service to Startup.cs file
1. Add string path to Startup.cs file
Basically, copy and paste lines 31 through 51 (approx) from the BangazonAPI Startup.cs file to the current project's Startup.cs
| priority | add items from bangazonapi startup file feature name add mvc service to startup cs file add cors service to startup cs file add string path to startup cs file basically copy and paste lines through approx from the bangazonapi startup cs file to the current project s startup cs | 1 |
442,215 | 12,741,781,118 | IssuesEvent | 2020-06-26 07:01:24 | Eastrall/Rhisis | https://api.github.com/repos/Eastrall/Rhisis | closed | Mars Mine: Mobs take no damage and drop nothing | bug priority: high srv: world sys: battle v0.4.x | # :beetle: Bug Report
**Rhisis version:** v0.4.3
## Expected Behavior
Mobs take damage and drop loot.
## Current Behavior
Mobs don't even register taking damage and drop nothing after relog.
## Steps to Reproduce
1. Enter Mars Mine.
2. Attack Mutant Feferns.
3. Relog.
4. **Mobs now take damage** but won't drop loot.
| 1.0 | Mars Mine: Mobs take no damage and drop nothing - # :beetle: Bug Report
**Rhisis version:** v0.4.3
## Expected Behavior
Mobs take damage and drop loot.
## Current Behavior
Mobs don't even register taking damage and drop nothing after relog.
## Steps to Reproduce
1. Enter Mars Mine.
2. Attack Mutant Feferns.
3. Relog.
4. **Mobs now take damage** but won't drop loot.
| priority | mars mine mobs take no damage and drop nothing beetle bug report rhisis version expected behavior mobs take damage and drop loot current behavior mobs don t even register taking damage and drop nothing after relog steps to reproduce enter mars mine attack mutant feferns relog mobs now take damage but won t drop loot | 1 |
566,607 | 16,825,275,389 | IssuesEvent | 2021-06-17 17:40:38 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Unable to view notification template in UI | component:ui priority:high state:needs_devel type:bug | <!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Bug Report
##### SUMMARY
<!-- Briefly describe the problem. -->
I am trying to create a webhook notification template. When I first created the notification template, I was greeted with a blank screen and these javascript errors:
```
worker-json.js:1 GET https://my-awx-url.com/worker-json.js 404
react-dom.production.min.js:209 TypeError: console.warning is not a function
at Ec (CodeEditor.jsx:88)
at Zo (react-dom.production.min.js:153)
at Ss (react-dom.production.min.js:261)
at vl (react-dom.production.min.js:246)
at ml (react-dom.production.min.js:246)
at sl (react-dom.production.min.js:239)
at react-dom.production.min.js:123
at t.unstable_runWithPriority (scheduler.production.min.js:19)
at Vi (react-dom.production.min.js:122)
at Yi (react-dom.production.min.js:123)
asyncToGenerator.js:6 Uncaught (in promise) TypeError: console.warning is not a function
at Ec (CodeEditor.jsx:88)
at Zo (react-dom.production.min.js:153)
at Ss (react-dom.production.min.js:261)
at vl (react-dom.production.min.js:246)
at ml (react-dom.production.min.js:246)
at sl (react-dom.production.min.js:239)
at react-dom.production.min.js:123
at t.unstable_runWithPriority (scheduler.production.min.js:19)
at Vi (react-dom.production.min.js:122)
at Yi (react-dom.production.min.js:123)
useWebsocket.js:20 WebSocket is already in CLOSING or CLOSED state.
```
If I go back to my AWX homepage and go to the notifications screen however, I see that the template has still been created. I can edit the template fine, but if I try and click the template: `#/notification_templates/1/details`, I get the same results from above.
##### ENVIRONMENT
* AWX version: 19.1.0
* AWX install method: kubernetes
* Ansible version: X.Y.Z
* Operating System:
* Web Browser: Chrome
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
1. Add a new notification template and fill out all required fields (I am working with webhook notifications but it seems this problem persists across all notification types)
2. Press "save" on the create notification template screen
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
The notification template to be created and for me to be redirected to its page successfully (e.g. `#/notification_templates/1/details`)
##### ACTUAL RESULTS
<!-- What actually happened? -->
I get a blank screen and the javascript errors mentioned above.
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
| 1.0 | Unable to view notification template in UI - <!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Bug Report
##### SUMMARY
<!-- Briefly describe the problem. -->
I am trying to create a webhook notification template. When I first created the notification template, I was greeted with a blank screen and these javascript errors:
```
worker-json.js:1 GET https://my-awx-url.com/worker-json.js 404
react-dom.production.min.js:209 TypeError: console.warning is not a function
at Ec (CodeEditor.jsx:88)
at Zo (react-dom.production.min.js:153)
at Ss (react-dom.production.min.js:261)
at vl (react-dom.production.min.js:246)
at ml (react-dom.production.min.js:246)
at sl (react-dom.production.min.js:239)
at react-dom.production.min.js:123
at t.unstable_runWithPriority (scheduler.production.min.js:19)
at Vi (react-dom.production.min.js:122)
at Yi (react-dom.production.min.js:123)
asyncToGenerator.js:6 Uncaught (in promise) TypeError: console.warning is not a function
at Ec (CodeEditor.jsx:88)
at Zo (react-dom.production.min.js:153)
at Ss (react-dom.production.min.js:261)
at vl (react-dom.production.min.js:246)
at ml (react-dom.production.min.js:246)
at sl (react-dom.production.min.js:239)
at react-dom.production.min.js:123
at t.unstable_runWithPriority (scheduler.production.min.js:19)
at Vi (react-dom.production.min.js:122)
at Yi (react-dom.production.min.js:123)
useWebsocket.js:20 WebSocket is already in CLOSING or CLOSED state.
```
If I go back to my AWX homepage and go to the notifications screen however, I see that the template has still been created. I can edit the template fine, but if I try and click the template: `#/notification_templates/1/details`, I get the same results from above.
##### ENVIRONMENT
* AWX version: 19.1.0
* AWX install method: kubernetes
* Ansible version: X.Y.Z
* Operating System:
* Web Browser: Chrome
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
1. Add a new notification template and fill out all required fields (I am working with webhook notifications but it seems this problem persists across all notification types)
2. Press "save" on the create notification template screen
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
The notification template to be created and for me to be redirected to its page successfully (e.g. `#/notification_templates/1/details`)
##### ACTUAL RESULTS
<!-- What actually happened? -->
I get a blank screen and the javascript errors mentioned above.
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
| priority | unable to view notification template in ui issues are for concrete actionable bugs and feature requests only if you re just asking for debugging help or technical support please use we have to limit this because of limited volunteer time to respond to issues issue type bug report summary i am trying to create a webhook notification template when i first created the notification template i was greeted with a blank screen and these javascript errors worker json js get react dom production min js typeerror console warning is not a function at ec codeeditor jsx at zo react dom production min js at ss react dom production min js at vl react dom production min js at ml react dom production min js at sl react dom production min js at react dom production min js at t unstable runwithpriority scheduler production min js at vi react dom production min js at yi react dom production min js asynctogenerator js uncaught in promise typeerror console warning is not a function at ec codeeditor jsx at zo react dom production min js at ss react dom production min js at vl react dom production min js at ml react dom production min js at sl react dom production min js at react dom production min js at t unstable runwithpriority scheduler production min js at vi react dom production min js at yi react dom production min js usewebsocket js websocket is already in closing or closed state if i go back to my awx homepage and go to the notifications screen however i see that the template has still been created i can edit the template fine but if i try and click the template notification templates details i get the same results from above environment awx version awx install method kubernetes ansible version x y z operating system web browser chrome steps to reproduce add a new notification template and fill out all required fields i am working with webhook notifications but it seems this problem persists across all notification types press save on the create notification template screen expected results the notification template to be created and for me to be redirected to its page successfully e g notification templates details actual results i get a blank screen and the javascript errors mentioned above additional information include any links to sosreport database dumps screenshots or other information | 1 |
718,993 | 24,740,730,846 | IssuesEvent | 2022-10-21 04:44:05 | AY2223S1-CS2103T-W10-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-W10-4/tp | closed | feat(tasks): Edit module associated to task | type.Story priority.High | # User Story
Tasks/Deadlines User Story 11
As a user, I can change the module of a specific task or deadline, so that I can move the tasks around if I make a mistake. | 1.0 | feat(tasks): Edit module associated to task - # User Story
Tasks/Deadlines User Story 11
As a user, I can change the module of a specific task or deadline, so that I can move the tasks around if I make a mistake. | priority | feat tasks edit module associated to task user story tasks deadlines user story as a user i can change the module of a specific task or deadline so that i can move the tasks around if i make a mistake | 1 |
145,331 | 5,565,001,028 | IssuesEvent | 2017-03-26 10:00:09 | Caleydo/mothertable | https://api.github.com/repos/Caleydo/mothertable | opened | Brush only works from top to bottom | high priority | Should also work when the user starts to brush an element and the drags upwards. | 1.0 | Brush only works from top to bottom - Should also work when the user starts to brush an element and the drags upwards. | priority | brush only works from top to bottom should also work when the user starts to brush an element and the drags upwards | 1 |
721,181 | 24,820,465,801 | IssuesEvent | 2022-10-25 16:02:07 | AY2223S1-CS2113-T17-4/tp | https://api.github.com/repos/AY2223S1-CS2113-T17-4/tp | closed | Create default class | type.Bug priority.High severity.High | Currently the backup .json doesn't get loaded into the .jar file. A `Defaults` class can be made to store this data, and when fetching has failed, can create a backup .json in the file directory. | 1.0 | Create default class - Currently the backup .json doesn't get loaded into the .jar file. A `Defaults` class can be made to store this data, and when fetching has failed, can create a backup .json in the file directory. | priority | create default class currently the backup json doesn t get loaded into the jar file a defaults class can be made to store this data and when fetching has failed can create a backup json in the file directory | 1 |
328,984 | 10,010,768,026 | IssuesEvent | 2019-07-15 08:54:15 | IATI/ckanext-iati | https://api.github.com/repos/IATI/ckanext-iati | closed | Many error messages on the Registry | High priority bug | Hi,
I tried to purge the Registry today but received a timeout error, and it didn't compete.
This data file is also showing a fair few error messages. Not sure if it's related to the purging or not? https://iatiregistry.org/dataset/ec-fpi-88

| 1.0 | Many error messages on the Registry - Hi,
I tried to purge the Registry today but received a timeout error, and it didn't compete.
This data file is also showing a fair few error messages. Not sure if it's related to the purging or not? https://iatiregistry.org/dataset/ec-fpi-88

| priority | many error messages on the registry hi i tried to purge the registry today but received a timeout error and it didn t compete this data file is also showing a fair few error messages not sure if it s related to the purging or not | 1 |
103,049 | 4,164,299,707 | IssuesEvent | 2016-06-18 17:58:29 | ALitttleBitDifferent/AmbientPrologueBugs | https://api.github.com/repos/ALitttleBitDifferent/AmbientPrologueBugs | opened | Camera not centered on Krini during the fight. | bug High Priority | The camera turned sideways during the fight. It might have been caused by trying to focus on Krini while at the same time trying to rotate it. | 1.0 | Camera not centered on Krini during the fight. - The camera turned sideways during the fight. It might have been caused by trying to focus on Krini while at the same time trying to rotate it. | priority | camera not centered on krini during the fight the camera turned sideways during the fight it might have been caused by trying to focus on krini while at the same time trying to rotate it | 1 |
125,230 | 4,954,559,903 | IssuesEvent | 2016-12-01 17:56:01 | lgblgblgb/xemu | https://api.github.com/repos/lgblgblgb/xemu | closed | Mega-65: colour RAM handling bug sometimes | bug HIGH PRIORITY MEGA65 work in progress | It seems, M65 emulator has an issue with colour RAM, sometimes wrong information is read. For example by scrolling the screen, the C65 "colour stripes" disappears as being colour on screen. The problem is given by the fact (it seems) that I use tricks to manage colour RAM at multiple places to speed up the more frequent reads (compared to writes). This should be fixed! Though I noticed now about the DMA changes, it's not DMA related, but an older bug in the general address decoding part. | 1.0 | Mega-65: colour RAM handling bug sometimes - It seems, M65 emulator has an issue with colour RAM, sometimes wrong information is read. For example by scrolling the screen, the C65 "colour stripes" disappears as being colour on screen. The problem is given by the fact (it seems) that I use tricks to manage colour RAM at multiple places to speed up the more frequent reads (compared to writes). This should be fixed! Though I noticed now about the DMA changes, it's not DMA related, but an older bug in the general address decoding part. | priority | mega colour ram handling bug sometimes it seems emulator has an issue with colour ram sometimes wrong information is read for example by scrolling the screen the colour stripes disappears as being colour on screen the problem is given by the fact it seems that i use tricks to manage colour ram at multiple places to speed up the more frequent reads compared to writes this should be fixed though i noticed now about the dma changes it s not dma related but an older bug in the general address decoding part | 1 |
471,774 | 13,610,124,020 | IssuesEvent | 2020-09-23 06:49:17 | GluuFederation/oxauth-config | https://api.github.com/repos/GluuFederation/oxauth-config | closed | OAuth - UMA Scopes | Priority-HIGH User Story | # Description
This endpoint can be used to create, view, add, search, partially update, complete update as well as deletion of a UMA scope.
# Endpoint
https://<servername:port>/api/v1/oxauth/uma/scopes/{inum}
# Supported methods
1. **GET**: Gets all UMA scope or optional can search for UMA scopes based on pattern.
2. **GET/{id}**: Gets an UMA scope based on unique identifier.
3. **POST**: Create a new UMA scope
4. **PUT**: Updates an existing UMA scope
5. **PATCH**: Patches an existing UMA scope
6. **DELETE**: Deletes an UMA scope.
# Data types exchanged
1. **Consumes**: JSON
1. **Produces**: JSON
# Acceptance Criteria
- API should handle error scenarios and throw appropriate exception.
- The API consumer must have required permission to access the endpoint.
- Operation should be successful.
# Swagger Spec
https://gluu.org/swagger-ui/?url=https://raw.githubusercontent.com/GluuFederation/oxauth-config/master/docs/oxauth-config-swagger.yaml#/OAuth_-_UMA_Scopes
____ | 1.0 | OAuth - UMA Scopes - # Description
This endpoint can be used to create, view, add, search, partially update, complete update as well as deletion of a UMA scope.
# Endpoint
https://<servername:port>/api/v1/oxauth/uma/scopes/{inum}
# Supported methods
1. **GET**: Gets all UMA scope or optional can search for UMA scopes based on pattern.
2. **GET/{id}**: Gets an UMA scope based on unique identifier.
3. **POST**: Create a new UMA scope
4. **PUT**: Updates an existing UMA scope
5. **PATCH**: Patches an existing UMA scope
6. **DELETE**: Deletes an UMA scope.
# Data types exchanged
1. **Consumes**: JSON
1. **Produces**: JSON
# Acceptance Criteria
- API should handle error scenarios and throw appropriate exception.
- The API consumer must have required permission to access the endpoint.
- Operation should be successful.
# Swagger Spec
https://gluu.org/swagger-ui/?url=https://raw.githubusercontent.com/GluuFederation/oxauth-config/master/docs/oxauth-config-swagger.yaml#/OAuth_-_UMA_Scopes
____ | priority | oauth uma scopes description this endpoint can be used to create view add search partially update complete update as well as deletion of a uma scope endpoint supported methods get gets all uma scope or optional can search for uma scopes based on pattern get id gets an uma scope based on unique identifier post create a new uma scope put updates an existing uma scope patch patches an existing uma scope delete deletes an uma scope data types exchanged consumes json produces json acceptance criteria api should handle error scenarios and throw appropriate exception the api consumer must have required permission to access the endpoint operation should be successful swagger spec | 1 |
492,377 | 14,201,196,081 | IssuesEvent | 2020-11-16 07:14:41 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.facebook.com - design is broken | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61895 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.facebook.com/login/account_recovery/name_search/?cuid=AYjbFLIBtmuokxxBZdWfVLV909i3L9LyDohZy9VKMVo_idjlD1SW6ZllABb7sfl-hIW8hzZO645yWDCw_oSvOQtiYesIIAjCTyYnA58Br903AYP-PCUjFrJziX1iXSFaVLEq6ZLfS5OfOPSr6cUUaqSvnvYUiJIgSkzIvsbwwqBh-ANQWGdwRZ1YJG_UDLHovD0&errorcode=1348092&flow=initiate_view&ls=initiate_view&refsrc=https%3A%2F%2Fm.facebook.com%2Flogin%2Faccount_recovery%2Fname_search%2F
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Edge
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
i cant open my account...
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/7c9e6fb7-4b3c-4a52-9aa5-101028e7d850.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201108174701</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/b882d1ea-3f07-4be4-840c-d3b035be9893)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.facebook.com - design is broken - <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61895 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.facebook.com/login/account_recovery/name_search/?cuid=AYjbFLIBtmuokxxBZdWfVLV909i3L9LyDohZy9VKMVo_idjlD1SW6ZllABb7sfl-hIW8hzZO645yWDCw_oSvOQtiYesIIAjCTyYnA58Br903AYP-PCUjFrJziX1iXSFaVLEq6ZLfS5OfOPSr6cUUaqSvnvYUiJIgSkzIvsbwwqBh-ANQWGdwRZ1YJG_UDLHovD0&errorcode=1348092&flow=initiate_view&ls=initiate_view&refsrc=https%3A%2F%2Fm.facebook.com%2Flogin%2Faccount_recovery%2Fname_search%2F
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Edge
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
i cant open my account...
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/7c9e6fb7-4b3c-4a52-9aa5-101028e7d850.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201108174701</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/b882d1ea-3f07-4be4-840c-d3b035be9893)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | design is broken url browser version firefox mobile operating system android tested another browser yes edge problem type design is broken description items are overlapped steps to reproduce i cant open my account view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
525,065 | 15,228,968,323 | IssuesEvent | 2021-02-18 12:15:48 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Spreadsheet copy-paste issues in IE | Bug C: Spreadsheet FP: In Development FP: Unplanned Kendo2 Priority 5 SEV: High | ### Bug report
This is a regression introduced with 2020.1.114.
### Reproduction of the problem
There are three identified scenarios in which copy/paste does not work as expected in IE:
1. Copied cells are retained in Spreadsheet clipboard and pasted altogether in IE: https://github.com/telerik/kendo/issues/11061
2. In IE copied Spreadsheet cells are pasted as value only: https://github.com/telerik/kendo/issues/11062
3. Values from neighboring cells on the same line are pasted in a single cell:
* Run the following Dojo: https://dojo.telerik.com/IcorilOZ/3
* Select cells A1:B1
* Click Ctrl+C
* Select cell A5
* Click Ctrl+V
* Cell content from A1:B1 was pasted into the single-cell A5.
### Expected/desired behavior
Copy-paste in IE should behave like in the other browsers.
### Environment
* **Kendo UI version:** 2020.2.513
* **Browser:** [IE] | 1.0 | Spreadsheet copy-paste issues in IE - ### Bug report
This is a regression introduced with 2020.1.114.
### Reproduction of the problem
There are three identified scenarios in which copy/paste does not work as expected in IE:
1. Copied cells are retained in Spreadsheet clipboard and pasted altogether in IE: https://github.com/telerik/kendo/issues/11061
2. In IE copied Spreadsheet cells are pasted as value only: https://github.com/telerik/kendo/issues/11062
3. Values from neighboring cells on the same line are pasted in a single cell:
* Run the following Dojo: https://dojo.telerik.com/IcorilOZ/3
* Select cells A1:B1
* Click Ctrl+C
* Select cell A5
* Click Ctrl+V
* Cell content from A1:B1 was pasted into the single-cell A5.
### Expected/desired behavior
Copy-paste in IE should behave like in the other browsers.
### Environment
* **Kendo UI version:** 2020.2.513
* **Browser:** [IE] | priority | spreadsheet copy paste issues in ie bug report this is a regression introduced with reproduction of the problem there are three identified scenarios in which copy paste does not work as expected in ie copied cells are retained in spreadsheet clipboard and pasted altogether in ie in ie copied spreadsheet cells are pasted as value only values from neighboring cells on the same line are pasted in a single cell run the following dojo select cells click ctrl c select cell click ctrl v cell content from was pasted into the single cell expected desired behavior copy paste in ie should behave like in the other browsers environment kendo ui version browser | 1 |
231,870 | 7,644,210,954 | IssuesEvent | 2018-05-08 14:52:42 | alces-software/metalware | https://api.github.com/repos/alces-software/metalware | closed | Update `/var/named/*` partially, like with `/etc/hosts` | bug enhancement high priority | Named should replace parts of files with the hosts information, much like how metalware replaces between the `METALWARE START` and `METALWARE END` parts of `/etc/hosts` after adding more nodes.
This is slightly more difficult as the named configuration creates multiple files in `/var/named/` which contain zone configuration as well as host information.
Named would ideally be setup by (primary network used as an example):
1. Creating the zone file after `metal configure domain`
```
$TTL 300
@ IN SOA . nobody.example.com. (
1507302382 ; Serial
600 ; Refresh
1800 ; Retry
604800 ; Expire
300 ; TTL
)
IN NS .
@ IN MX 10 pri.testcluster.cluster.local.
IN NS .
### METALWARE START ###
### METALWARE END ###
```
2. Rendering the hosts in-between the `METALWARE` tags
This may require changes in `metalware-default` as the zone information (from step 1) is included in the zone template file alongside the hosts information (see [`named/forward/default`](https://github.com/alces-software/metalware-default/blob/develop/named/forward/default)) | 1.0 | Update `/var/named/*` partially, like with `/etc/hosts` - Named should replace parts of files with the hosts information, much like how metalware replaces between the `METALWARE START` and `METALWARE END` parts of `/etc/hosts` after adding more nodes.
This is slightly more difficult as the named configuration creates multiple files in `/var/named/` which contain zone configuration as well as host information.
Named would ideally be setup by (primary network used as an example):
1. Creating the zone file after `metal configure domain`
```
$TTL 300
@ IN SOA . nobody.example.com. (
1507302382 ; Serial
600 ; Refresh
1800 ; Retry
604800 ; Expire
300 ; TTL
)
IN NS .
@ IN MX 10 pri.testcluster.cluster.local.
IN NS .
### METALWARE START ###
### METALWARE END ###
```
2. Rendering the hosts in-between the `METALWARE` tags
This may require changes in `metalware-default` as the zone information (from step 1) is included in the zone template file alongside the hosts information (see [`named/forward/default`](https://github.com/alces-software/metalware-default/blob/develop/named/forward/default)) | priority | update var named partially like with etc hosts named should replace parts of files with the hosts information much like how metalware replaces between the metalware start and metalware end parts of etc hosts after adding more nodes this is slightly more difficult as the named configuration creates multiple files in var named which contain zone configuration as well as host information named would ideally be setup by primary network used as an example creating the zone file after metal configure domain ttl in soa nobody example com serial refresh retry expire ttl in ns in mx pri testcluster cluster local in ns metalware start metalware end rendering the hosts in between the metalware tags this may require changes in metalware default as the zone information from step is included in the zone template file alongside the hosts information see | 1 |
575,253 | 17,025,794,374 | IssuesEvent | 2021-07-03 13:31:04 | level73/Monithon | https://api.github.com/repos/level73/Monithon | closed | Preparare possibile visualizzazione in caso di mancati dati delle API | Priority: High | Nei casi in cui non abbiamo dati che provengono dalle API, preparare una visualizzazione alternativa del report. | 1.0 | Preparare possibile visualizzazione in caso di mancati dati delle API - Nei casi in cui non abbiamo dati che provengono dalle API, preparare una visualizzazione alternativa del report. | priority | preparare possibile visualizzazione in caso di mancati dati delle api nei casi in cui non abbiamo dati che provengono dalle api preparare una visualizzazione alternativa del report | 1 |
779,195 | 27,343,072,773 | IssuesEvent | 2023-02-27 00:48:05 | antoinecarme/pyaf | https://api.github.com/repos/antoinecarme/pyaf | closed | RISC-V Hardware Platform Validation | class:enhancement class:Bench priority:high class:devops topic:third_party_support status:in_progress topic:Green | New Hardware Architecture : RISC-V
This is a very experimental platform. PyAF intends to work on a not-yet-fully-manufactured hardware (#176 ).
The Sifive dev board (VisionFive 2) is planned through a kickstart project :
https://www.kickstarter.com/projects/starfive/visionfive-2/
Availablity : Dec. 2022
CPU features :

| 1.0 | RISC-V Hardware Platform Validation - New Hardware Architecture : RISC-V
This is a very experimental platform. PyAF intends to work on a not-yet-fully-manufactured hardware (#176 ).
The Sifive dev board (VisionFive 2) is planned through a kickstart project :
https://www.kickstarter.com/projects/starfive/visionfive-2/
Availablity : Dec. 2022
CPU features :

| priority | risc v hardware platform validation new hardware architecture risc v this is a very experimental platform pyaf intends to work on a not yet fully manufactured hardware the sifive dev board visionfive is planned through a kickstart project availablity dec cpu features | 1 |
138,748 | 5,346,360,047 | IssuesEvent | 2017-02-17 19:27:47 | phetsims/circuit-construction-kit-black-box-study | https://api.github.com/repos/phetsims/circuit-construction-kit-black-box-study | closed | Create new stable SHAs | priority:2-high status:on-hold type:question | From https://github.com/phetsims/circuit-construction-kit-common/issues/72#issuecomment-277304504
Master has diverged so much from the legacy branch that it is no longer to automatically or semi-automatically merge the changes. All changes now have to be done manually, this means extra work during development and that everything must be tested twice (in the branch and in master). The best long term plan is to abandon the legacy branch and create a new stable branch from master--this would require one-time exhaustive testing but would save us the most time in the long run. Every time we say "what if there is just one more minor change for the stale branch" there are many more changes for the stale branch. So we should plan to stop making changes to this branch soon, whether (a) it requires no more changes or (b) we can use a fresh branch from master.
When we do this will be up to @arouinfar @kathy-phet and @ariel-phet. | 1.0 | Create new stable SHAs - From https://github.com/phetsims/circuit-construction-kit-common/issues/72#issuecomment-277304504
Master has diverged so much from the legacy branch that it is no longer to automatically or semi-automatically merge the changes. All changes now have to be done manually, this means extra work during development and that everything must be tested twice (in the branch and in master). The best long term plan is to abandon the legacy branch and create a new stable branch from master--this would require one-time exhaustive testing but would save us the most time in the long run. Every time we say "what if there is just one more minor change for the stale branch" there are many more changes for the stale branch. So we should plan to stop making changes to this branch soon, whether (a) it requires no more changes or (b) we can use a fresh branch from master.
When we do this will be up to @arouinfar @kathy-phet and @ariel-phet. | priority | create new stable shas from master has diverged so much from the legacy branch that it is no longer to automatically or semi automatically merge the changes all changes now have to be done manually this means extra work during development and that everything must be tested twice in the branch and in master the best long term plan is to abandon the legacy branch and create a new stable branch from master this would require one time exhaustive testing but would save us the most time in the long run every time we say what if there is just one more minor change for the stale branch there are many more changes for the stale branch so we should plan to stop making changes to this branch soon whether a it requires no more changes or b we can use a fresh branch from master when we do this will be up to arouinfar kathy phet and ariel phet | 1 |
789,743 | 27,804,534,478 | IssuesEvent | 2023-03-17 18:34:56 | Apicurio/apicurio-registry | https://api.github.com/repos/Apicurio/apicurio-registry | closed | Unable to download artifacts with apicurio-registry-maven-plugin (RESTEASY003635) | Bug component/registry priority/high | Hey folks,
I try to use the Apicurio Maven plugin to download AVRO schema files. The registry is behind a Keycloak, so I specified that as auth server. But still, I am getting this exception:
```
[INFO] --- apicurio-registry-maven-plugin:2.1.5.Final:download (default) @ order-import-service ---
[INFO] Downloading artifact [MyNamespace] / [MyAvroSchema] (version null).
[ERROR] Exception while downloading artifact [MyNamespace] / [MyAvroSchema]
io.apicurio.rest.client.auth.exception.AuthException: {"error":"RESTEASY003635: No match for accept header"}
at io.apicurio.rest.client.auth.exception.AuthErrorHandler.handleErrorResponse(AuthErrorHandler.java:23)
at io.apicurio.rest.client.handler.BodyHandler.lambda$toSupplierOfType$1(BodyHandler.java:46)
at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:190)
at io.apicurio.rest.client.auth.OidcAuth.requestAccessToken(OidcAuth.java:71)
at io.apicurio.rest.client.auth.OidcAuth.apply(OidcAuth.java:59)
at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:163)
at io.apicurio.registry.rest.client.impl.RegistryClientImpl.getLatestArtifact(RegistryClientImpl.java:76)
at io.apicurio.registry.maven.DownloadRegistryMojo.executeInternal(DownloadRegistryMojo.java:97)
at io.apicurio.registry.maven.AbstractRegistryMojo.execute(AbstractRegistryMojo.java:95)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:271)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:196)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:160)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.mvndaemon.mvnd.builder.SmartBuilderImpl.buildProject(SmartBuilderImpl.java:178)
at org.mvndaemon.mvnd.builder.SmartBuilderImpl$ProjectBuildTask.run(SmartBuilderImpl.java:198)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
```
This is how I configured the plugin in `pom.xml`:
```
<plugin>
<groupId>io.apicurio</groupId>
<artifactId>apicurio-registry-maven-plugin</artifactId>
<version>2.1.5.Final</version>
<executions>
<execution>
<phase>generate-sources</phase>
<goals>
<goal>download</goal>
</goals>
<configuration>
<registryUrl>https://my-registry-url/apis/registry/v2</registryUrl>
<authServerUrl>KEYCLOAK_URL</authServerUrl>
<clientId>KEYCLOAK_USER</clientId>
<clientSecret>KEYCLOAK_SECRET</clientSecret>
<artifacts>
<artifact>
<groupId>MyNamespace</groupId>
<artifactId>MyAvroSchema</artifactId>
<file>${project.build.directory}/downloaded-sources/avro/MyAvroSchema.avsc</file>
<overwrite>true</overwrite>
</artifact>
</artifacts>
</configuration>
</execution>
</executions>
</plugin>
```
Any idea why this is happening? Unfortunately, I cannot see which exact request is failing.
Cheers,
Stefan
Edit: I am running v 2.2.1.Final of Apicurio:
```
{
"name": "Apicurio Registry (SQL)",
"description": "High performance, runtime registry for schemas and API designs.",
"version": "2.2.1.Final",
"builtOn": "2022-03-02T17:38:33Z"
}
``` | 1.0 | Unable to download artifacts with apicurio-registry-maven-plugin (RESTEASY003635) - Hey folks,
I try to use the Apicurio Maven plugin to download AVRO schema files. The registry is behind a Keycloak, so I specified that as auth server. But still, I am getting this exception:
```
[INFO] --- apicurio-registry-maven-plugin:2.1.5.Final:download (default) @ order-import-service ---
[INFO] Downloading artifact [MyNamespace] / [MyAvroSchema] (version null).
[ERROR] Exception while downloading artifact [MyNamespace] / [MyAvroSchema]
io.apicurio.rest.client.auth.exception.AuthException: {"error":"RESTEASY003635: No match for accept header"}
at io.apicurio.rest.client.auth.exception.AuthErrorHandler.handleErrorResponse(AuthErrorHandler.java:23)
at io.apicurio.rest.client.handler.BodyHandler.lambda$toSupplierOfType$1(BodyHandler.java:46)
at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:190)
at io.apicurio.rest.client.auth.OidcAuth.requestAccessToken(OidcAuth.java:71)
at io.apicurio.rest.client.auth.OidcAuth.apply(OidcAuth.java:59)
at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:163)
at io.apicurio.registry.rest.client.impl.RegistryClientImpl.getLatestArtifact(RegistryClientImpl.java:76)
at io.apicurio.registry.maven.DownloadRegistryMojo.executeInternal(DownloadRegistryMojo.java:97)
at io.apicurio.registry.maven.AbstractRegistryMojo.execute(AbstractRegistryMojo.java:95)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:271)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:196)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:160)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.mvndaemon.mvnd.builder.SmartBuilderImpl.buildProject(SmartBuilderImpl.java:178)
at org.mvndaemon.mvnd.builder.SmartBuilderImpl$ProjectBuildTask.run(SmartBuilderImpl.java:198)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
```
This is how I configured the plugin in `pom.xml`:
```
<plugin>
<groupId>io.apicurio</groupId>
<artifactId>apicurio-registry-maven-plugin</artifactId>
<version>2.1.5.Final</version>
<executions>
<execution>
<phase>generate-sources</phase>
<goals>
<goal>download</goal>
</goals>
<configuration>
<registryUrl>https://my-registry-url/apis/registry/v2</registryUrl>
<authServerUrl>KEYCLOAK_URL</authServerUrl>
<clientId>KEYCLOAK_USER</clientId>
<clientSecret>KEYCLOAK_SECRET</clientSecret>
<artifacts>
<artifact>
<groupId>MyNamespace</groupId>
<artifactId>MyAvroSchema</artifactId>
<file>${project.build.directory}/downloaded-sources/avro/MyAvroSchema.avsc</file>
<overwrite>true</overwrite>
</artifact>
</artifacts>
</configuration>
</execution>
</executions>
</plugin>
```
Any idea why this is happening? Unfortunately, I cannot see which exact request is failing.
Cheers,
Stefan
Edit: I am running v 2.2.1.Final of Apicurio:
```
{
"name": "Apicurio Registry (SQL)",
"description": "High performance, runtime registry for schemas and API designs.",
"version": "2.2.1.Final",
"builtOn": "2022-03-02T17:38:33Z"
}
``` | priority | unable to download artifacts with apicurio registry maven plugin hey folks i try to use the apicurio maven plugin to download avro schema files the registry is behind a keycloak so i specified that as auth server but still i am getting this exception apicurio registry maven plugin final download default order import service downloading artifact version null exception while downloading artifact io apicurio rest client auth exception authexception error no match for accept header at io apicurio rest client auth exception autherrorhandler handleerrorresponse autherrorhandler java at io apicurio rest client handler bodyhandler lambda tosupplieroftype bodyhandler java at io apicurio rest client jdkhttpclient sendrequest jdkhttpclient java at io apicurio rest client auth oidcauth requestaccesstoken oidcauth java at io apicurio rest client auth oidcauth apply oidcauth java at io apicurio rest client jdkhttpclient sendrequest jdkhttpclient java at io apicurio registry rest client impl registryclientimpl getlatestartifact registryclientimpl java at io apicurio registry maven downloadregistrymojo executeinternal downloadregistrymojo java at io apicurio registry maven abstractregistrymojo execute abstractregistrymojo java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor doexecute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org mvndaemon mvnd builder smartbuilderimpl buildproject smartbuilderimpl java at org mvndaemon mvnd builder smartbuilderimpl projectbuildtask run smartbuilderimpl java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java this is how i configured the plugin in pom xml io apicurio apicurio registry maven plugin final generate sources download keycloak url keycloak user keycloak secret mynamespace myavroschema project build directory downloaded sources avro myavroschema avsc true any idea why this is happening unfortunately i cannot see which exact request is failing cheers stefan edit i am running v final of apicurio name apicurio registry sql description high performance runtime registry for schemas and api designs version final builton | 1 |
608,593 | 18,843,232,825 | IssuesEvent | 2021-11-11 12:06:38 | amosproj/amos2021ws05-fin-prod-port-quick-check | https://api.github.com/repos/amosproj/amos2021ws05-fin-prod-port-quick-check | opened | Epic Create Project | priority: high Epic: Create Project | ## User story
1. As a consultant
2. I need to be able to create a project
3. So that I evaluate them
## Acceptance criteria
* Create View "Project Overview"
* Create View "Manage Project"
* Create View "Manage Project Member"
* Must be done
## Definition of done
* Added only after week 5
* The same for all features
| 1.0 | Epic Create Project - ## User story
1. As a consultant
2. I need to be able to create a project
3. So that I evaluate them
## Acceptance criteria
* Create View "Project Overview"
* Create View "Manage Project"
* Create View "Manage Project Member"
* Must be done
## Definition of done
* Added only after week 5
* The same for all features
| priority | epic create project user story as a consultant i need to be able to create a project so that i evaluate them acceptance criteria create view project overview create view manage project create view manage project member must be done definition of done added only after week the same for all features | 1 |
39,642 | 2,857,875,736 | IssuesEvent | 2015-06-02 21:57:04 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Server log error: Null value in createdate for DataFile | Component: File Upload & Handling Priority: High Status: QA Type: Bug | Saw this in the server log at 60ox. Not sure whether it causes any user observable symptoms.
There are a bunch of files ~10 or more, grep "Failing row contains" server.log*
[2015-04-21T21:26:43.102-0400] [glassfish 4.1] [WARNING] [] [javax.enterprise.resource.jta.com.sun.enterprise.transaction] [tid: _ThreadID=51 _ThreadName=jk-connector(5)] [timeMillis: 1429666003102] [levelValue: 900] [[
DTX5014: Caught exception in beforeCompletion() callback:
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: null value in column "createdate" violates not-null constraint
Detail: Failing row contains (2668843, DataFile, null, null, 2015-04-21 21:26:42.788, null, 2015-04-21 21:21:37.347, null, null, 64805, null).
Error Code: 0
Call: INSERT INTO DVOBJECT (CREATEDATE, INDEXTIME, MODIFICATIONTIME, PERMISSIONINDEXTIME, PERMISSIONMODIFICATIONTIME, PUBLICATIONDATE, CREATOR_ID, OWNER_ID, RELEASEUSER_ID, DTYPE) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
bind => [10 parameters bound]
Query: InsertObjectQuery([DataFile id:null name:null])
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl$1.handleException(EntityManagerSetupImpl.java:696)
at org.eclipse.persistence.transaction.AbstractSynchronizationListener.handleException(AbstractSynchronizationListener.java:275)
at org.eclipse.persistence.transaction.AbstractSynchronizationListener.beforeCompletion(AbstractSynchronizationListener.java:170)
at org.eclipse.persistence.transaction.JTASynchronizationListener.beforeCompletion(JTASynchronizationListener.java:68)
at com.sun.enterprise.transaction.JavaEETransactionImpl.commit(JavaEETransactionImpl.java:452)
at com.sun.enterprise.transaction.JavaEETransactionManagerSimplified.commit(JavaEETransactionManagerSimplified.java:854)
at com.sun.ejb.containers.EJBContainerTransactionManager.completeNewTx(EJBContainerTransactionManager.java:719)
at com.sun.ejb.containers.EJBContainerTransactionManager.postInvokeTx(EJBContainerTransactionManager.java:503)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:4566)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2074)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2044)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:220)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:88)
at com.sun.proxy.$Proxy436.submit(Unknown Source)
at edu.harvard.iq.dataverse.__EJB31_Generated__EjbDataverseEngine__Intf____Bean__.submit(Unknown Source)
at edu.harvard.iq.dataverse.DatasetPage.save(DatasetPage.java:1808)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.el.parser.AstValue.invoke(AstValue.java:289)
at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:304)
at org.jboss.weld.util.el.ForwardingMethodExpression.invoke(ForwardingMethodExpression.java:40)
at org.jboss.weld.el.WeldMethodExpression.invoke(WeldMethodExpression.java:50)
at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105)
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:87)
at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102)
at javax.faces.component.UICommand.broadcast(UICommand.java:315)
at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:790)
at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1282)
at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:81)
at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:198)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:646)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.glassfish.tyrus.servlet.TyrusServletFilter.doFilter(TyrusServletFilter.java:295)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.ocpsoft.rewrite.servlet.RewriteFilter.doFilter(RewriteFilter.java:205)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:412)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
| 1.0 | Server log error: Null value in createdate for DataFile - Saw this in the server log at 60ox. Not sure whether it causes any user observable symptoms.
There are a bunch of files ~10 or more, grep "Failing row contains" server.log*
[2015-04-21T21:26:43.102-0400] [glassfish 4.1] [WARNING] [] [javax.enterprise.resource.jta.com.sun.enterprise.transaction] [tid: _ThreadID=51 _ThreadName=jk-connector(5)] [timeMillis: 1429666003102] [levelValue: 900] [[
DTX5014: Caught exception in beforeCompletion() callback:
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: null value in column "createdate" violates not-null constraint
Detail: Failing row contains (2668843, DataFile, null, null, 2015-04-21 21:26:42.788, null, 2015-04-21 21:21:37.347, null, null, 64805, null).
Error Code: 0
Call: INSERT INTO DVOBJECT (CREATEDATE, INDEXTIME, MODIFICATIONTIME, PERMISSIONINDEXTIME, PERMISSIONMODIFICATIONTIME, PUBLICATIONDATE, CREATOR_ID, OWNER_ID, RELEASEUSER_ID, DTYPE) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
bind => [10 parameters bound]
Query: InsertObjectQuery([DataFile id:null name:null])
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl$1.handleException(EntityManagerSetupImpl.java:696)
at org.eclipse.persistence.transaction.AbstractSynchronizationListener.handleException(AbstractSynchronizationListener.java:275)
at org.eclipse.persistence.transaction.AbstractSynchronizationListener.beforeCompletion(AbstractSynchronizationListener.java:170)
at org.eclipse.persistence.transaction.JTASynchronizationListener.beforeCompletion(JTASynchronizationListener.java:68)
at com.sun.enterprise.transaction.JavaEETransactionImpl.commit(JavaEETransactionImpl.java:452)
at com.sun.enterprise.transaction.JavaEETransactionManagerSimplified.commit(JavaEETransactionManagerSimplified.java:854)
at com.sun.ejb.containers.EJBContainerTransactionManager.completeNewTx(EJBContainerTransactionManager.java:719)
at com.sun.ejb.containers.EJBContainerTransactionManager.postInvokeTx(EJBContainerTransactionManager.java:503)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:4566)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2074)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2044)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:220)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:88)
at com.sun.proxy.$Proxy436.submit(Unknown Source)
at edu.harvard.iq.dataverse.__EJB31_Generated__EjbDataverseEngine__Intf____Bean__.submit(Unknown Source)
at edu.harvard.iq.dataverse.DatasetPage.save(DatasetPage.java:1808)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.el.parser.AstValue.invoke(AstValue.java:289)
at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:304)
at org.jboss.weld.util.el.ForwardingMethodExpression.invoke(ForwardingMethodExpression.java:40)
at org.jboss.weld.el.WeldMethodExpression.invoke(WeldMethodExpression.java:50)
at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105)
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:87)
at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102)
at javax.faces.component.UICommand.broadcast(UICommand.java:315)
at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:790)
at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1282)
at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:81)
at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:198)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:646)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.glassfish.tyrus.servlet.TyrusServletFilter.doFilter(TyrusServletFilter.java:295)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.ocpsoft.rewrite.servlet.RewriteFilter.doFilter(RewriteFilter.java:205)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:412)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
| priority | server log error null value in createdate for datafile saw this in the server log at not sure whether it causes any user observable symptoms there are a bunch of files or more grep failing row contains server log caught exception in beforecompletion callback javax persistence persistenceexception exception eclipse persistence services org eclipse persistence exceptions databaseexception internal exception org postgresql util psqlexception error null value in column createdate violates not null constraint detail failing row contains datafile null null null null null null error code call insert into dvobject createdate indextime modificationtime permissionindextime permissionmodificationtime publicationdate creator id owner id releaseuser id dtype values bind query insertobjectquery at org eclipse persistence internal jpa entitymanagersetupimpl handleexception entitymanagersetupimpl java at org eclipse persistence transaction abstractsynchronizationlistener handleexception abstractsynchronizationlistener java at org eclipse persistence transaction abstractsynchronizationlistener beforecompletion abstractsynchronizationlistener java at org eclipse persistence transaction jtasynchronizationlistener beforecompletion jtasynchronizationlistener java at com sun enterprise transaction javaeetransactionimpl commit javaeetransactionimpl java at com sun enterprise transaction javaeetransactionmanagersimplified commit javaeetransactionmanagersimplified java at com sun ejb containers ejbcontainertransactionmanager completenewtx ejbcontainertransactionmanager java at com sun ejb containers ejbcontainertransactionmanager postinvoketx ejbcontainertransactionmanager java at com sun ejb containers basecontainer postinvoketx basecontainer java at com sun ejb containers basecontainer postinvoke basecontainer java at com sun ejb containers basecontainer postinvoke basecontainer java at com sun ejb containers ejblocalobjectinvocationhandler invoke ejblocalobjectinvocationhandler java at com sun ejb containers ejblocalobjectinvocationhandlerdelegate invoke ejblocalobjectinvocationhandlerdelegate java at com sun proxy submit unknown source at edu harvard iq dataverse generated ejbdataverseengine intf bean submit unknown source at edu harvard iq dataverse datasetpage save datasetpage java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com sun el parser astvalue invoke astvalue java at com sun el methodexpressionimpl invoke methodexpressionimpl java at org jboss weld util el forwardingmethodexpression invoke forwardingmethodexpression java at org jboss weld el weldmethodexpression invoke weldmethodexpression java at com sun faces facelets el tagmethodexpression invoke tagmethodexpression java at javax faces component methodbindingmethodexpressionadapter invoke methodbindingmethodexpressionadapter java at com sun faces application actionlistenerimpl processaction actionlistenerimpl java at javax faces component uicommand broadcast uicommand java at javax faces component uiviewroot broadcastevents uiviewroot java at javax faces component uiviewroot processapplication uiviewroot java at com sun faces lifecycle invokeapplicationphase execute invokeapplicationphase java at com sun faces lifecycle phase dophase phase java at com sun faces lifecycle lifecycleimpl execute lifecycleimpl java at javax faces webapp facesservlet service facesservlet java at org apache catalina core standardwrapper service standardwrapper java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org glassfish tyrus servlet tyrusservletfilter dofilter tyrusservletfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org ocpsoft rewrite servlet rewritefilter dofilter rewritefilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline invoke standardpipeline java at com sun enterprise web webpipeline invoke webpipeline java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline invoke standardpipeline java at org apache catalina connector coyoteadapter doservice coyoteadapter java at org apache catalina connector coyoteadapter service coyoteadapter java at com sun enterprise services impl containermapper httphandlercallable call containermapper java at com sun enterprise services impl containermapper service containermapper java at org glassfish grizzly http server httphandler runservice httphandler java at org glassfish grizzly http server httphandler dohandle httphandler java at org glassfish grizzly http server httpserverfilter handleread httpserverfilter java at org glassfish grizzly filterchain executorresolver execute executorresolver java at org glassfish grizzly filterchain defaultfilterchain executefilter defaultfilterchain java at org glassfish grizzly filterchain defaultfilterchain executechainpart defaultfilterchain java at org glassfish grizzly filterchain defaultfilterchain execute defaultfilterchain java at org glassfish grizzly filterchain defaultfilterchain process defaultfilterchain java at org glassfish grizzly processorexecutor execute processorexecutor java at org glassfish grizzly nio transport tcpniotransport fireioevent tcpniotransport java | 1 |
582,690 | 17,367,797,987 | IssuesEvent | 2021-07-30 09:43:51 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to add Yoast custom breadcrumbs title support in amp. | NEXT UPDATE [Priority: HIGH] bug | Ref:https://secure.helpscout.net/conversation/1576761717/207067?folderId=2770543
Custom breadcrumbs title option is available Yoast Premium.
See this screenshot:https://monosnap.com/file/vNoLNdt2Svmkff8uAoizDx92ISNrWk
| 1.0 | Need to add Yoast custom breadcrumbs title support in amp. - Ref:https://secure.helpscout.net/conversation/1576761717/207067?folderId=2770543
Custom breadcrumbs title option is available Yoast Premium.
See this screenshot:https://monosnap.com/file/vNoLNdt2Svmkff8uAoizDx92ISNrWk
| priority | need to add yoast custom breadcrumbs title support in amp ref custom breadcrumbs title option is available yoast premium see this screenshot | 1 |
535,661 | 15,696,035,294 | IssuesEvent | 2021-03-26 01:01:36 | AbirAahammed/E-card | https://api.github.com/repos/AbirAahammed/E-card | closed | Add firebase support for app data storage and authenticated login | High Priority Sub-user story | - need to add the right dependency for firebase support
- useful for firebase authentication
- useful for using firebase database
Time taken: ~5 days (including a full setup + research on the topic) | 1.0 | Add firebase support for app data storage and authenticated login - - need to add the right dependency for firebase support
- useful for firebase authentication
- useful for using firebase database
Time taken: ~5 days (including a full setup + research on the topic) | priority | add firebase support for app data storage and authenticated login need to add the right dependency for firebase support useful for firebase authentication useful for using firebase database time taken days including a full setup research on the topic | 1 |
349,519 | 10,470,697,350 | IssuesEvent | 2019-09-23 05:06:55 | AY1920S1-CS2113T-W12-3/main | https://api.github.com/repos/AY1920S1-CS2113T-W12-3/main | opened | As a sportsman, I can view the current number of people at the gym (gym requires tap in/tap out feature) | priority.High type.Story | so that I can decide if I want to go to the gym later. | 1.0 | As a sportsman, I can view the current number of people at the gym (gym requires tap in/tap out feature) - so that I can decide if I want to go to the gym later. | priority | as a sportsman i can view the current number of people at the gym gym requires tap in tap out feature so that i can decide if i want to go to the gym later | 1 |
530,892 | 15,437,856,734 | IssuesEvent | 2021-03-07 18:14:26 | AY2021S2-CS2113T-F08-4/tp | https://api.github.com/repos/AY2021S2-CS2113T-F08-4/tp | closed | Implement classes for task types | priority.High type.Task | create classes for task types: task, assignment, midterm and final. | 1.0 | Implement classes for task types - create classes for task types: task, assignment, midterm and final. | priority | implement classes for task types create classes for task types task assignment midterm and final | 1 |
514,867 | 14,945,739,520 | IssuesEvent | 2021-01-26 04:58:40 | ProjectSidewalk/SidewalkWebpage | https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage | closed | Sidewalk Gallery: Show 3-4 cards per row dynamically | Priority: High sidewalkgallery ui-update | We want to show 3 in a row or 4 in a row (we will try them out) for the gallery view. As a result, cards will likely need to be responsive and dynamically sized.
| 1.0 | Sidewalk Gallery: Show 3-4 cards per row dynamically - We want to show 3 in a row or 4 in a row (we will try them out) for the gallery view. As a result, cards will likely need to be responsive and dynamically sized.
| priority | sidewalk gallery show cards per row dynamically we want to show in a row or in a row we will try them out for the gallery view as a result cards will likely need to be responsive and dynamically sized | 1 |
444,094 | 12,806,229,297 | IssuesEvent | 2020-07-03 09:03:29 | OpenSRP/opensrp-plan-evaluator | https://api.github.com/repos/OpenSRP/opensrp-plan-evaluator | closed | Add an Interface to TaskDAO for checking if a task if already generated | Dynamic Tasking High Priority | - [ ] Add an Interface on TaskDAO for checking is task already exists.
- [ ] The method should take in the entityId, planIdentifier,code as params and should return a boolean
- [ ] Implement this interface of both client core and server core
- [ ] When checking if a task exists exclude tasks with status ARCHIVED or CANCELLED
- [ ] Invoke this before task generation and skip task generation if the task exists
| 1.0 | Add an Interface to TaskDAO for checking if a task if already generated - - [ ] Add an Interface on TaskDAO for checking is task already exists.
- [ ] The method should take in the entityId, planIdentifier,code as params and should return a boolean
- [ ] Implement this interface of both client core and server core
- [ ] When checking if a task exists exclude tasks with status ARCHIVED or CANCELLED
- [ ] Invoke this before task generation and skip task generation if the task exists
| priority | add an interface to taskdao for checking if a task if already generated add an interface on taskdao for checking is task already exists the method should take in the entityid planidentifier code as params and should return a boolean implement this interface of both client core and server core when checking if a task exists exclude tasks with status archived or cancelled invoke this before task generation and skip task generation if the task exists | 1 |
322,735 | 9,828,242,862 | IssuesEvent | 2019-06-15 09:40:35 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to add domain when we're using redux for translating the field | [Priority: HIGH] bug | We need to add the domain when declaring the redux array. So, that user can translate the redux with WPML.
ref: https://secure.helpscout.net/conversation/629316836/31032?folderId=2206768
example:
Screenshot > http://take.ms/EQdmr
Here, we're just calling the $redux_builder_amp array. Instead of this, we need to call like this > http://take.ms/ykRbm | 1.0 | Need to add domain when we're using redux for translating the field - We need to add the domain when declaring the redux array. So, that user can translate the redux with WPML.
ref: https://secure.helpscout.net/conversation/629316836/31032?folderId=2206768
example:
Screenshot > http://take.ms/EQdmr
Here, we're just calling the $redux_builder_amp array. Instead of this, we need to call like this > http://take.ms/ykRbm | priority | need to add domain when we re using redux for translating the field we need to add the domain when declaring the redux array so that user can translate the redux with wpml ref example screenshot here we re just calling the redux builder amp array instead of this we need to call like this | 1 |
291,529 | 8,926,395,413 | IssuesEvent | 2019-01-22 04:02:32 | prettier/prettier | https://api.github.com/repos/prettier/prettier | closed | MDX parser cannot incorrectly parses React components used within markdown syntax in paragraphs | lang:mdx priority:high status:has pr type:bug | This isn't a regression in `1.16.0`. I was doing this before today's release.
**Prettier 1.16.0**
[Playground link](https://prettier.io/playground/#N4Igxg9gdgLgprEAuEcAeAHCAnGACSKAZ3wCEIAbAEzirwF48AKASgYD4AdKTmAHhLZoAc3YAjSjTrw0-APSCRXHlAAqACwCWRPNrwBDKHnT6Athgpw8EAGYGCEc9AT4A+mLiaowvAFcitLpGRI5WmjD6FJpgeKb62ADWVBAA7kYp4ep4IaZWfOTUgXLsADR43IZ0+joAnhC+BIbZcHCu1lBgVnUNYBQQAXgw6mERUWBEZeF4GRQUeGLYcPoJQZDYi2AwFDXTw0YYizAwmnDYeDY4cTA64QB0ICUgEBjH0ETIoPFCKQAK8QjvFD6ABuEE0VAeIAW+jACTgMAAyhgYV5hMgYNhfHBHuoYKYKAB1LTwIjIzoIgHhTTA8I1ZDgIjvR5eAK4H7YfTCOLIGyRAKPABWRDQpA5sPhCLMcAAMl44Dy+diQEK0AjUZYAIq+CDwBUUfkgZHYVn00xUNCQg5eGAE8FDZAADgADI8Dv04ASORh6Qc4KzgfLHosAI6+TSLdmc7lIXn6pUBUyaPUGojquBanXypAYrGPCJiW1Ue1IABMeY5mii3gAwo5oyAoM5If44Kp9GJAbGAgBfbtAA)
```sh
--parser mdx
--no-semi
--single-quote
```
**Input:**
```mdx
export const Bolded = () =>
<strong>bolded text</strong>
This is an example of a component _being used in some italic markdown with some <Bolded />,
and as you can see_ once you close the italics, it will break incorrectly when prettier formats it.
```
**Output:**
```mdx
export const Bolded = () => <strong>bolded text</strong>
This is an example of a component _being used in some italic markdown with some
<Bolded />,
and as you can see_ once you close the italics, it will break incorrectly when prettier formats it.
```
**Rendered:**
As you can see, when it adds a completely blank line before the component, it will cause the italics to break and format the markdown / MDX incorrectly.
<img width="667" alt="screen shot 2019-01-20 at 4 25 17 pm" src="https://user-images.githubusercontent.com/446260/51445902-0f7aeb80-1cd0-11e9-9bb7-5216f4b5c720.png">
**Expected behavior:**
The completely blank line should not be there and the paragraph should read correctly and unbroken.
<img width="696" alt="screen shot 2019-01-20 at 4 26 59 pm" src="https://user-images.githubusercontent.com/446260/51445919-3df8c680-1cd0-11e9-9b5b-d49d6a0d81ce.png"> | 1.0 | MDX parser cannot incorrectly parses React components used within markdown syntax in paragraphs - This isn't a regression in `1.16.0`. I was doing this before today's release.
**Prettier 1.16.0**
[Playground link](https://prettier.io/playground/#N4Igxg9gdgLgprEAuEcAeAHCAnGACSKAZ3wCEIAbAEzirwF48AKASgYD4AdKTmAHhLZoAc3YAjSjTrw0-APSCRXHlAAqACwCWRPNrwBDKHnT6Athgpw8EAGYGCEc9AT4A+mLiaowvAFcitLpGRI5WmjD6FJpgeKb62ADWVBAA7kYp4ep4IaZWfOTUgXLsADR43IZ0+joAnhC+BIbZcHCu1lBgVnUNYBQQAXgw6mERUWBEZeF4GRQUeGLYcPoJQZDYi2AwFDXTw0YYizAwmnDYeDY4cTA64QB0ICUgEBjH0ETIoPFCKQAK8QjvFD6ABuEE0VAeIAW+jACTgMAAyhgYV5hMgYNhfHBHuoYKYKAB1LTwIjIzoIgHhTTA8I1ZDgIjvR5eAK4H7YfTCOLIGyRAKPABWRDQpA5sPhCLMcAAMl44Dy+diQEK0AjUZYAIq+CDwBUUfkgZHYVn00xUNCQg5eGAE8FDZAADgADI8Dv04ASORh6Qc4KzgfLHosAI6+TSLdmc7lIXn6pUBUyaPUGojquBanXypAYrGPCJiW1Ue1IABMeY5mii3gAwo5oyAoM5If44Kp9GJAbGAgBfbtAA)
```sh
--parser mdx
--no-semi
--single-quote
```
**Input:**
```mdx
export const Bolded = () =>
<strong>bolded text</strong>
This is an example of a component _being used in some italic markdown with some <Bolded />,
and as you can see_ once you close the italics, it will break incorrectly when prettier formats it.
```
**Output:**
```mdx
export const Bolded = () => <strong>bolded text</strong>
This is an example of a component _being used in some italic markdown with some
<Bolded />,
and as you can see_ once you close the italics, it will break incorrectly when prettier formats it.
```
**Rendered:**
As you can see, when it adds a completely blank line before the component, it will cause the italics to break and format the markdown / MDX incorrectly.
<img width="667" alt="screen shot 2019-01-20 at 4 25 17 pm" src="https://user-images.githubusercontent.com/446260/51445902-0f7aeb80-1cd0-11e9-9bb7-5216f4b5c720.png">
**Expected behavior:**
The completely blank line should not be there and the paragraph should read correctly and unbroken.
<img width="696" alt="screen shot 2019-01-20 at 4 26 59 pm" src="https://user-images.githubusercontent.com/446260/51445919-3df8c680-1cd0-11e9-9b5b-d49d6a0d81ce.png"> | priority | mdx parser cannot incorrectly parses react components used within markdown syntax in paragraphs this isn t a regression in i was doing this before today s release prettier sh parser mdx no semi single quote input mdx export const bolded bolded text this is an example of a component being used in some italic markdown with some and as you can see once you close the italics it will break incorrectly when prettier formats it output mdx export const bolded bolded text this is an example of a component being used in some italic markdown with some and as you can see once you close the italics it will break incorrectly when prettier formats it rendered as you can see when it adds a completely blank line before the component it will cause the italics to break and format the markdown mdx incorrectly img width alt screen shot at pm src expected behavior the completely blank line should not be there and the paragraph should read correctly and unbroken img width alt screen shot at pm src | 1 |
266,437 | 8,367,703,278 | IssuesEvent | 2018-10-04 13:01:19 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Can add break points to every element from source view | Component/Composer Component/Debugger Imported Priority/High Type/Bug | <a href="https://github.com/yasassri"><img src="https://avatars1.githubusercontent.com/u/7681361?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [yasassri](https://github.com/yasassri)**
_Friday Oct 13, 2017 at 05:33 GMT_
_Originally opened as https://github.com/ballerina-lang/composer/issues/3693_
----

| 1.0 | Can add break points to every element from source view - <a href="https://github.com/yasassri"><img src="https://avatars1.githubusercontent.com/u/7681361?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [yasassri](https://github.com/yasassri)**
_Friday Oct 13, 2017 at 05:33 GMT_
_Originally opened as https://github.com/ballerina-lang/composer/issues/3693_
----

| priority | can add break points to every element from source view issue by friday oct at gmt originally opened as | 1 |
461,293 | 13,228,121,144 | IssuesEvent | 2020-08-18 05:21:13 | moonwards1/Moonwards-Virtual-Moon | https://api.github.com/repos/moonwards1/Moonwards-Virtual-Moon | closed | Create lunar terrain material | Department: Graphics/GFX Priority: High Type: Feature | Lalande crater is very young, as lunar craters go. It has a lot of small rocks on its surface.
Something to note - in order to control dust, which is a major hazard on the moon, the surface where the colony is has been heated with concentrated sunlight and microwaves until it all fused together. During that process, a sand mixture was sprinkled on top so that the final surface wasn't slippery. So, this surface is mostly going to be an artistic representation of that.
If it is possible to add tiny craters procedurally, that would aid a lot with realism.
The references below serve well for the areas that remain in their natural state - most visible on either side of the road to the spaceport, for example.
https://www.lpi.usra.edu/resources/apollopanoramas/images/preview/original/JSC2007e045378.jpg
https://quickmap.lroc.asu.edu/layers?extent=-8.3030536,-4.5808771,-8.2280308,-4.5467412&proj=10&layers=NrBsFYBoAZIRnpEoAsjYIHYFcA2vIBvAXwF1Siyk44oNEQ4BmBBenfS8r0oA
https://www.lpi.usra.edu/resources/apollopanoramas/
The Pinterest page has resources that may be useful for both versions of the terrain - the fused version and the natural version. The images there link to the full resource, and includes a number of cc0 images and full PBR materials.
https://www.pinterest.com.mx/holder3884/lunar-terrain/ | 1.0 | Create lunar terrain material - Lalande crater is very young, as lunar craters go. It has a lot of small rocks on its surface.
Something to note - in order to control dust, which is a major hazard on the moon, the surface where the colony is has been heated with concentrated sunlight and microwaves until it all fused together. During that process, a sand mixture was sprinkled on top so that the final surface wasn't slippery. So, this surface is mostly going to be an artistic representation of that.
If it is possible to add tiny craters procedurally, that would aid a lot with realism.
The references below serve well for the areas that remain in their natural state - most visible on either side of the road to the spaceport, for example.
https://www.lpi.usra.edu/resources/apollopanoramas/images/preview/original/JSC2007e045378.jpg
https://quickmap.lroc.asu.edu/layers?extent=-8.3030536,-4.5808771,-8.2280308,-4.5467412&proj=10&layers=NrBsFYBoAZIRnpEoAsjYIHYFcA2vIBvAXwF1Siyk44oNEQ4BmBBenfS8r0oA
https://www.lpi.usra.edu/resources/apollopanoramas/
The Pinterest page has resources that may be useful for both versions of the terrain - the fused version and the natural version. The images there link to the full resource, and includes a number of cc0 images and full PBR materials.
https://www.pinterest.com.mx/holder3884/lunar-terrain/ | priority | create lunar terrain material lalande crater is very young as lunar craters go it has a lot of small rocks on its surface something to note in order to control dust which is a major hazard on the moon the surface where the colony is has been heated with concentrated sunlight and microwaves until it all fused together during that process a sand mixture was sprinkled on top so that the final surface wasn t slippery so this surface is mostly going to be an artistic representation of that if it is possible to add tiny craters procedurally that would aid a lot with realism the references below serve well for the areas that remain in their natural state most visible on either side of the road to the spaceport for example the pinterest page has resources that may be useful for both versions of the terrain the fused version and the natural version the images there link to the full resource and includes a number of images and full pbr materials | 1 |
578,714 | 17,150,521,119 | IssuesEvent | 2021-07-13 19:55:54 | TravelMapping/EduTools | https://api.github.com/repos/TravelMapping/EduTools | closed | HDX: AV Status Panel horizontal size changes | enhancement high priority user interface visual | With #218 fixed, a remaining annoyance with the AV Status Panel during an AV is that the pseudocode table and sometimes other AV-specific items bounce side to side to remain centered when the panel grows because of a long message or other component of the panel. | 1.0 | HDX: AV Status Panel horizontal size changes - With #218 fixed, a remaining annoyance with the AV Status Panel during an AV is that the pseudocode table and sometimes other AV-specific items bounce side to side to remain centered when the panel grows because of a long message or other component of the panel. | priority | hdx av status panel horizontal size changes with fixed a remaining annoyance with the av status panel during an av is that the pseudocode table and sometimes other av specific items bounce side to side to remain centered when the panel grows because of a long message or other component of the panel | 1 |
747,172 | 26,075,868,639 | IssuesEvent | 2022-12-24 14:02:29 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | Return old snapshots to preferences | priority: high bug: pending release notes: pending | **Is your feature request related to a problem? Please describe.**
dt 4.2 introduced a new snapshots module which allows for panning and zooming. This can be very useful in certain scenarios, but comes with the drawback of being slower. For my use case - someone who rarely has use for panning and zooming snapshots, but does take snapshots CONSTANTLY, the drawback here significantly outweighs the gain. It only takes an extra second to load a snapshot, but doing this a hundred times an image adds up to a lot. (Why a hundred? Because using shortcuts, I love to tap the snapshot on and off a lot to compare before/after of each effect. It is both simpler, and easier on the fingers, to do this via shortcut than via constant mouse dragging.)
**Describe the solution you'd like**
Ability to select either 'legacy' or 'modern' snapshots tool in preferences, where 'modern' is the version as implemented in 4.2, and 'legacy' is the version implemented in older releases.
| 1.0 | Return old snapshots to preferences - **Is your feature request related to a problem? Please describe.**
dt 4.2 introduced a new snapshots module which allows for panning and zooming. This can be very useful in certain scenarios, but comes with the drawback of being slower. For my use case - someone who rarely has use for panning and zooming snapshots, but does take snapshots CONSTANTLY, the drawback here significantly outweighs the gain. It only takes an extra second to load a snapshot, but doing this a hundred times an image adds up to a lot. (Why a hundred? Because using shortcuts, I love to tap the snapshot on and off a lot to compare before/after of each effect. It is both simpler, and easier on the fingers, to do this via shortcut than via constant mouse dragging.)
**Describe the solution you'd like**
Ability to select either 'legacy' or 'modern' snapshots tool in preferences, where 'modern' is the version as implemented in 4.2, and 'legacy' is the version implemented in older releases.
| priority | return old snapshots to preferences is your feature request related to a problem please describe dt introduced a new snapshots module which allows for panning and zooming this can be very useful in certain scenarios but comes with the drawback of being slower for my use case someone who rarely has use for panning and zooming snapshots but does take snapshots constantly the drawback here significantly outweighs the gain it only takes an extra second to load a snapshot but doing this a hundred times an image adds up to a lot why a hundred because using shortcuts i love to tap the snapshot on and off a lot to compare before after of each effect it is both simpler and easier on the fingers to do this via shortcut than via constant mouse dragging describe the solution you d like ability to select either legacy or modern snapshots tool in preferences where modern is the version as implemented in and legacy is the version implemented in older releases | 1 |
240,436 | 7,801,694,449 | IssuesEvent | 2018-06-10 01:13:31 | gcorso97/fleamaster | https://api.github.com/repos/gcorso97/fleamaster | closed | Products-bought page | backlog high priority | A page containing an overview of all products bought by the user account. | 1.0 | Products-bought page - A page containing an overview of all products bought by the user account. | priority | products bought page a page containing an overview of all products bought by the user account | 1 |
442,788 | 12,750,255,451 | IssuesEvent | 2020-06-27 03:11:38 | elementary/website | https://api.github.com/repos/elementary/website | reopened | Translation extract scripts do not extract from template files | Priority: High Status: In Progress l10n | When running `php _backend/Console/Translate.php` it should also translate the `_template` files. It currently does not and trying to include those files individually causes things to break.
| 1.0 | Translation extract scripts do not extract from template files - When running `php _backend/Console/Translate.php` it should also translate the `_template` files. It currently does not and trying to include those files individually causes things to break.
| priority | translation extract scripts do not extract from template files when running php backend console translate php it should also translate the template files it currently does not and trying to include those files individually causes things to break | 1 |
712,309 | 24,490,047,269 | IssuesEvent | 2022-10-09 23:30:55 | LiveSplit/livesplit-core | https://api.github.com/repos/LiveSplit/livesplit-core | closed | Panic when formatting fractional parts | bug priority: high | We made the assumption there that the nanoseconds are always less than `1_000_000_000` as that would be a whole second, which should be wrapped around to 0 again. The `time` crate guarantees that this is the case, but apparently it can still somehow happen. | 1.0 | Panic when formatting fractional parts - We made the assumption there that the nanoseconds are always less than `1_000_000_000` as that would be a whole second, which should be wrapped around to 0 again. The `time` crate guarantees that this is the case, but apparently it can still somehow happen. | priority | panic when formatting fractional parts we made the assumption there that the nanoseconds are always less than as that would be a whole second which should be wrapped around to again the time crate guarantees that this is the case but apparently it can still somehow happen | 1 |
564,790 | 16,741,118,988 | IssuesEvent | 2021-06-11 09:54:28 | vaexio/vaex | https://api.github.com/repos/vaexio/vaex | closed | `where` is underperforming when dealing with string columns | priority: high | I am noticing a large performance slowdown when using `df.func.where` on virtual string column.
This only happens for string columns, I have not been able to reproduce such performance slowdown for numeric data.
Here is a reproducible example:
```python
import vaex
import numpy as np
# Create a couple of textual features
abc = 'abcdefghijklmnopqrstuvwxyz'
num = '0123456789'
# Text elements
x_elems = [''.join([i] * 5) for i in abc]
y_elems = [''.join([i] * 5) for i in num]
# Set probabilities - the 1st element should appear ~80% of the time
p_x = np.ones(26) * 0.008
p_x[0] = 0.8
# Create the data
size = 10_000_000
x = np.random.choice(x_elems, size=size, p=p_x)
y = np.random.choice(y_elems, size=size)
# Similar data but now numerical
z_elems = np.arange(9)
p_z = np.ones(9) * 0.025
p_z[0] = 0.8
z = np.random.choice(z_elems, size=size, p=p_z)
w = np.random.choice(np.arange(100), size=size)
# Create DataFrame
df = vaex.from_arrays(x=x, y=y, z=z, w=w)
# Add a virtual columns
df['str_feat'] = df.x + df.y
# Use where to adjust the feature 'feat' such that every time 'aaaaa' apears in df.x it should be the same in df.feat
df['str_feat_adj'] = df.func.where(df.x == 'aaaaa', 'aaaaa', df.str_feat)
# Do value counts of the new feature "str_feat" ---> takes <2 seconds
df.str_feat.value_counts(progress=True)
# Do value counts of the adjusted feature "str_feat_adj" ---> takes ~40 seconds.
df.str_feat_adj.value_counts(progress=True)
# Add a numerical virtual columns
df['num_feat'] = df.z + df.w
df['num_feat_adj'] = df.func.where(df.z == 0, 0, df.num_feat)
# Do value counts of the new feature "num_feat" ---> takes <1 second
df.num_feat.value_counts(progress=True)
# Do value counts of the adjusted feature "num_feat_adj" ---> takes <1 second
df.num_feat_adj.value_counts(progress=True)
``` | 1.0 | `where` is underperforming when dealing with string columns - I am noticing a large performance slowdown when using `df.func.where` on virtual string column.
This only happens for string columns, I have not been able to reproduce such performance slowdown for numeric data.
Here is a reproducible example:
```python
import vaex
import numpy as np
# Create a couple of textual features
abc = 'abcdefghijklmnopqrstuvwxyz'
num = '0123456789'
# Text elements
x_elems = [''.join([i] * 5) for i in abc]
y_elems = [''.join([i] * 5) for i in num]
# Set probabilities - the 1st element should appear ~80% of the time
p_x = np.ones(26) * 0.008
p_x[0] = 0.8
# Create the data
size = 10_000_000
x = np.random.choice(x_elems, size=size, p=p_x)
y = np.random.choice(y_elems, size=size)
# Similar data but now numerical
z_elems = np.arange(9)
p_z = np.ones(9) * 0.025
p_z[0] = 0.8
z = np.random.choice(z_elems, size=size, p=p_z)
w = np.random.choice(np.arange(100), size=size)
# Create DataFrame
df = vaex.from_arrays(x=x, y=y, z=z, w=w)
# Add a virtual columns
df['str_feat'] = df.x + df.y
# Use where to adjust the feature 'feat' such that every time 'aaaaa' apears in df.x it should be the same in df.feat
df['str_feat_adj'] = df.func.where(df.x == 'aaaaa', 'aaaaa', df.str_feat)
# Do value counts of the new feature "str_feat" ---> takes <2 seconds
df.str_feat.value_counts(progress=True)
# Do value counts of the adjusted feature "str_feat_adj" ---> takes ~40 seconds.
df.str_feat_adj.value_counts(progress=True)
# Add a numerical virtual columns
df['num_feat'] = df.z + df.w
df['num_feat_adj'] = df.func.where(df.z == 0, 0, df.num_feat)
# Do value counts of the new feature "num_feat" ---> takes <1 second
df.num_feat.value_counts(progress=True)
# Do value counts of the adjusted feature "num_feat_adj" ---> takes <1 second
df.num_feat_adj.value_counts(progress=True)
``` | priority | where is underperforming when dealing with string columns i am noticing a large performance slowdown when using df func where on virtual string column this only happens for string columns i have not been able to reproduce such performance slowdown for numeric data here is a reproducible example python import vaex import numpy as np create a couple of textual features abc abcdefghijklmnopqrstuvwxyz num text elements x elems for i in abc y elems for i in num set probabilities the element should appear of the time p x np ones p x create the data size x np random choice x elems size size p p x y np random choice y elems size size similar data but now numerical z elems np arange p z np ones p z z np random choice z elems size size p p z w np random choice np arange size size create dataframe df vaex from arrays x x y y z z w w add a virtual columns df df x df y use where to adjust the feature feat such that every time aaaaa apears in df x it should be the same in df feat df df func where df x aaaaa aaaaa df str feat do value counts of the new feature str feat takes seconds df str feat value counts progress true do value counts of the adjusted feature str feat adj takes seconds df str feat adj value counts progress true add a numerical virtual columns df df z df w df df func where df z df num feat do value counts of the new feature num feat takes second df num feat value counts progress true do value counts of the adjusted feature num feat adj takes second df num feat adj value counts progress true | 1 |
266,469 | 8,368,012,384 | IssuesEvent | 2018-10-04 13:47:41 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Cannot import a swagger definition | Component/Composer Priority/High Type/Bug | **Steps to reproduce:**
1. Download the swagger definition from Swagger Pet store
http://petstore.swagger.io/v2/swagger.json
2. Import it

**Affected Versions:**
Tools - beta 15
**OS, DB, other environment details and versions:**
Firefox Browser - 59.0.2 (64-bit)
| 1.0 | Cannot import a swagger definition - **Steps to reproduce:**
1. Download the swagger definition from Swagger Pet store
http://petstore.swagger.io/v2/swagger.json
2. Import it

**Affected Versions:**
Tools - beta 15
**OS, DB, other environment details and versions:**
Firefox Browser - 59.0.2 (64-bit)
| priority | cannot import a swagger definition steps to reproduce download the swagger definition from swagger pet store import it affected versions tools beta os db other environment details and versions firefox browser bit | 1 |
227,896 | 7,543,959,539 | IssuesEvent | 2018-04-17 16:57:24 | GingerWalnut/SQ5.0Public | https://api.github.com/repos/GingerWalnut/SQ5.0Public | closed | Canora exiting issue | Priority High Ships Bug | I was flying out of Canora and I managed to get into space, but when I moved, I was teleported back to the 3rd quartile of the planet. When I flew back out, I was in the same place in space as before, and so on until I tried flying a couple thousand blocks East on Canora, then managed to successfully fly way from Canora. I had no error messages, seems like the planet TP boxes are a bit strangely configured. | 1.0 | Canora exiting issue - I was flying out of Canora and I managed to get into space, but when I moved, I was teleported back to the 3rd quartile of the planet. When I flew back out, I was in the same place in space as before, and so on until I tried flying a couple thousand blocks East on Canora, then managed to successfully fly way from Canora. I had no error messages, seems like the planet TP boxes are a bit strangely configured. | priority | canora exiting issue i was flying out of canora and i managed to get into space but when i moved i was teleported back to the quartile of the planet when i flew back out i was in the same place in space as before and so on until i tried flying a couple thousand blocks east on canora then managed to successfully fly way from canora i had no error messages seems like the planet tp boxes are a bit strangely configured | 1 |
375,246 | 11,101,935,920 | IssuesEvent | 2019-12-16 22:33:10 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | opened | Dashboard localization | Docs: not needed Effort: small Feature Module: Dashboard Priority: high | ## Is your feature request related to a problem? Please describe.
Need to add localized strings to localization files for the mobile dashboard, with translations.
## Describe the solution you'd like
At a minimum, english and french localized strings for all added strings that are user facing for the mobile dashboard.
## Implementation
Track all issues/prs/code which add user facing strings in this PR and complete all the localization toward the end of development
## Describe alternatives you've considered
N/A
## Additional context
N/A
| 1.0 | Dashboard localization - ## Is your feature request related to a problem? Please describe.
Need to add localized strings to localization files for the mobile dashboard, with translations.
## Describe the solution you'd like
At a minimum, english and french localized strings for all added strings that are user facing for the mobile dashboard.
## Implementation
Track all issues/prs/code which add user facing strings in this PR and complete all the localization toward the end of development
## Describe alternatives you've considered
N/A
## Additional context
N/A
| priority | dashboard localization is your feature request related to a problem please describe need to add localized strings to localization files for the mobile dashboard with translations describe the solution you d like at a minimum english and french localized strings for all added strings that are user facing for the mobile dashboard implementation track all issues prs code which add user facing strings in this pr and complete all the localization toward the end of development describe alternatives you ve considered n a additional context n a | 1 |
269,461 | 8,435,886,587 | IssuesEvent | 2018-10-17 14:12:07 | smartdevicelink/sdl_core | https://api.github.com/repos/smartdevicelink/sdl_core | closed | Add getter and setter for path in qdb sql database | Bug Contributor priority 1: High | ### Bug Report
Add getter and setter for path in qdb sql database
Steps
1. The path to the file: src/components/utils/include/utils/qdb_wrapper/sql_database.h
2. Needed to add getter and setter for path in qdb sql database
##### Expected Behavior
Getter and setter for path in qdb sql database is present
##### Observed Behavior
No getter and setter for path in qdb sql database
##### OS & Version Information
* OS/Version:
* SDL Core Version:
* Testing Against: | 1.0 | Add getter and setter for path in qdb sql database - ### Bug Report
Add getter and setter for path in qdb sql database
Steps
1. The path to the file: src/components/utils/include/utils/qdb_wrapper/sql_database.h
2. Needed to add getter and setter for path in qdb sql database
##### Expected Behavior
Getter and setter for path in qdb sql database is present
##### Observed Behavior
No getter and setter for path in qdb sql database
##### OS & Version Information
* OS/Version:
* SDL Core Version:
* Testing Against: | priority | add getter and setter for path in qdb sql database bug report add getter and setter for path in qdb sql database steps the path to the file src components utils include utils qdb wrapper sql database h needed to add getter and setter for path in qdb sql database expected behavior getter and setter for path in qdb sql database is present observed behavior no getter and setter for path in qdb sql database os version information os version sdl core version testing against | 1 |
770,676 | 27,050,370,382 | IssuesEvent | 2023-02-13 12:50:43 | dvrpc/TrackingProgress | https://api.github.com/repos/dvrpc/TrackingProgress | opened | GDP chart1(a,b,c) | high priority | @hachadorian This is the first of three charts for Gross Domestic Product. The CSVs for these are formatted the way you'll be getting them going forward. There are some known data issues that will soon be fixed, and we'll need to swap out the CSVs, but I wanted to get you this now so you can begin building the page.
The first chart is the most complex, because it has two dropdown menus. I'll start with a visual mockups of what the chart will look like for each of the second dropdown selections, then some specifications below.
### Chart 1a - Where Select Value Type = Annual Change

**Chart Type:** line
**Data File:** gdp_chart1a.csv
**Chart Title:** GDP Growth
**Drop-down 1 Label:** Select Industry:
**Drop-down 1 List (to be formatted as pictured) and CSV header endings:**
Dropdown List | CSV Header Endings
-- | --
All Industries | total
Goods | goods
Agriculture, Forestry, Fishing, and Mining | ag_for_fish_min
Construction | construction
Manufacturing | manufacturing
Durable Goods | manu_durable
Nondurable Goods | manu_nondurable
Services | services
Arts, Entertainment, Recreation, Accommodation, and Food Services | art_ent_rec_acc_food
Educational Services | ed_services
Finance, Insurance, Real Estate | fin_insur_real_est
Health Care and Social Assistance | health_social_assist
Information | information
Other Services (except Government) | other_services
Professional Services | prof_services
Retail Trade | retail_trade
Transportation, Warehousing, and Utilities | transp_warehouse_util
Wholesale Trade | wholesale_trade
Government and government enterprises | gov
**Legend Items and CSV header beginnings:**
Legend Label | CSV Header Beginnings
-- | --
DVRPC Region* | dvrpc_
NJ Counties* | njcos_
PA Suburban Counties* | pasubcos_
Bucks** | bucks_
Burlington** | burl_
Camden** | camd_
Chester** | ches_
Delaware** | del_
Gloucester** | glo_
Mercer** | mer_
Montgomery** | mont_
Philadelphia** | phil_
**Drop-down 2 Label:** Select Value Type:
**Drop-down 2 List:** Annual Change
**Y Axis Label:** Growth Rate
**Y Axis # format:** $x,xxx (note, not as pictured)
**Mouse-over display:** $x,xxx
**X axis label:** Year
**Legend Note:** Geography: * Regional, ** County
**Singular/ Plural Source:** Source:
**Source line:** U.S. Census Bureau's Business Formation Statistics
### Chart 1b - Where Select Value Type = Change Since Base Year

**Chart Type:** line
**Data File:** gdp_chart1b.csv
**Chart Title:** GDP Growth
**Drop-down 1 Label:** Select Industry:
**Drop-down 1 List (to be formatted as pictured) and CSV header endings:**
Dropdown List | CSV Header Endings
-- | --
All Industries | total
Goods | goods
Agriculture, Forestry, Fishing, and Mining | ag_for_fish_min
Construction | construction
Manufacturing | manufacturing
Durable Goods | manu_durable
Nondurable Goods | manu_nondurable
Services | services
Arts, Entertainment, Recreation, Accommodation, and Food Services | art_ent_rec_acc_food
Educational Services | ed_services
Finance, Insurance, Real Estate | fin_insur_real_est
Health Care and Social Assistance | health_social_assist
Information | information
Other Services (except Government) | other_services
Professional Services | prof_services
Retail Trade | retail_trade
Transportation, Warehousing, and Utilities | transp_warehouse_util
Wholesale Trade | wholesale_trade
Government and government enterprises | gov
**Legend Items and CSV header beginnings:**
Legend Label | CSV Header Beginnings
-- | --
DVRPC Region* | dvrpc_
NJ Counties* | njcos_
PA Suburban Counties* | pasubcos_
Bucks** | bucks_
Burlington** | burl_
Camden** | camd_
Chester** | ches_
Delaware** | del_
Gloucester** | glo_
Mercer** | mer_
Montgomery** | mont_
Philadelphia** | phil_
**Drop-down 2 Label:** Select Value Type:
**Drop-down 2 List:** Change Since Base Year
**Y Axis Label:** Growth Rate
**Y Axis # format:** $x,xxx (note, not as pictured)
**Mouse-over display:** $x,xxx
**X axis label:** Year
**Legend Note:** Geography: * Regional, ** County
**Singular/ Plural Source:** Source:
**Source line:** U.S. Census Bureau's Business Formation Statistics
### Chart 1c - Where Select Value Type = Total GDP

**Chart Type:** line
**Data File:** gdp_chart1c.csv
**Chart Title:** GDP Growth
**Drop-down 1 Label:** Select Industry:
**Drop-down 1 List (to be formatted as pictured) and CSV header endings:**
Dropdown List | CSV Header Endings
-- | --
All Industries | total
Goods | goods
Agriculture, Forestry, Fishing, and Mining | ag_for_fish_min
Construction | construction
Manufacturing | manufacturing
Durable Goods | manu_durable
Nondurable Goods | manu_nondurable
Services | services
Arts, Entertainment, Recreation, Accommodation, and Food Services | art_ent_rec_acc_food
Educational Services | ed_services
Finance, Insurance, Real Estate | fin_insur_real_est
Health Care and Social Assistance | health_social_assist
Information | information
Other Services (except Government) | other_services
Professional Services | prof_services
Retail Trade | retail_trade
Transportation, Warehousing, and Utilities | transp_warehouse_util
Wholesale Trade | wholesale_trade
Government and government enterprises | gov
**Legend Items and CSV header beginnings:**
Legend Label | CSV Header Beginnings
-- | --
DVRPC Region* | dvrpc_
NJ Counties* | njcos_
PA Suburban Counties* | pasubcos_
Bucks** | bucks_
Burlington** | burl_
Camden** | camd_
Chester** | ches_
Delaware** | del_
Gloucester** | glo_
Mercer** | mer_
Montgomery** | mont_
Philadelphia** | phil_
**Drop-down 2 Label:** Select Value Type:
**Drop-down 2 List:** Annual Change
**Y Axis Label:** Total GDP in Millions of Dollars (2021)
**Y Axis # format:** $x,xxx (note, not as pictured)
**Mouse-over display:** $x,xxx
**X axis label:** Year
**Legend Note:** Geography: * Regional, ** County
**Singular/ Plural Source:** Source:
**Source line:** U.S. Census Bureau's Business Formation Statistics
| 1.0 | GDP chart1(a,b,c) - @hachadorian This is the first of three charts for Gross Domestic Product. The CSVs for these are formatted the way you'll be getting them going forward. There are some known data issues that will soon be fixed, and we'll need to swap out the CSVs, but I wanted to get you this now so you can begin building the page.
The first chart is the most complex, because it has two dropdown menus. I'll start with a visual mockups of what the chart will look like for each of the second dropdown selections, then some specifications below.
### Chart 1a - Where Select Value Type = Annual Change

**Chart Type:** line
**Data File:** gdp_chart1a.csv
**Chart Title:** GDP Growth
**Drop-down 1 Label:** Select Industry:
**Drop-down 1 List (to be formatted as pictured) and CSV header endings:**
Dropdown List | CSV Header Endings
-- | --
All Industries | total
Goods | goods
Agriculture, Forestry, Fishing, and Mining | ag_for_fish_min
Construction | construction
Manufacturing | manufacturing
Durable Goods | manu_durable
Nondurable Goods | manu_nondurable
Services | services
Arts, Entertainment, Recreation, Accommodation, and Food Services | art_ent_rec_acc_food
Educational Services | ed_services
Finance, Insurance, Real Estate | fin_insur_real_est
Health Care and Social Assistance | health_social_assist
Information | information
Other Services (except Government) | other_services
Professional Services | prof_services
Retail Trade | retail_trade
Transportation, Warehousing, and Utilities | transp_warehouse_util
Wholesale Trade | wholesale_trade
Government and government enterprises | gov
**Legend Items and CSV header beginnings:**
Legend Label | CSV Header Beginnings
-- | --
DVRPC Region* | dvrpc_
NJ Counties* | njcos_
PA Suburban Counties* | pasubcos_
Bucks** | bucks_
Burlington** | burl_
Camden** | camd_
Chester** | ches_
Delaware** | del_
Gloucester** | glo_
Mercer** | mer_
Montgomery** | mont_
Philadelphia** | phil_
**Drop-down 2 Label:** Select Value Type:
**Drop-down 2 List:** Annual Change
**Y Axis Label:** Growth Rate
**Y Axis # format:** $x,xxx (note, not as pictured)
**Mouse-over display:** $x,xxx
**X axis label:** Year
**Legend Note:** Geography: * Regional, ** County
**Singular/ Plural Source:** Source:
**Source line:** U.S. Census Bureau's Business Formation Statistics
### Chart 1b - Where Select Value Type = Change Since Base Year

**Chart Type:** line
**Data File:** gdp_chart1b.csv
**Chart Title:** GDP Growth
**Drop-down 1 Label:** Select Industry:
**Drop-down 1 List (to be formatted as pictured) and CSV header endings:**
Dropdown List | CSV Header Endings
-- | --
All Industries | total
Goods | goods
Agriculture, Forestry, Fishing, and Mining | ag_for_fish_min
Construction | construction
Manufacturing | manufacturing
Durable Goods | manu_durable
Nondurable Goods | manu_nondurable
Services | services
Arts, Entertainment, Recreation, Accommodation, and Food Services | art_ent_rec_acc_food
Educational Services | ed_services
Finance, Insurance, Real Estate | fin_insur_real_est
Health Care and Social Assistance | health_social_assist
Information | information
Other Services (except Government) | other_services
Professional Services | prof_services
Retail Trade | retail_trade
Transportation, Warehousing, and Utilities | transp_warehouse_util
Wholesale Trade | wholesale_trade
Government and government enterprises | gov
**Legend Items and CSV header beginnings:**
Legend Label | CSV Header Beginnings
-- | --
DVRPC Region* | dvrpc_
NJ Counties* | njcos_
PA Suburban Counties* | pasubcos_
Bucks** | bucks_
Burlington** | burl_
Camden** | camd_
Chester** | ches_
Delaware** | del_
Gloucester** | glo_
Mercer** | mer_
Montgomery** | mont_
Philadelphia** | phil_
**Drop-down 2 Label:** Select Value Type:
**Drop-down 2 List:** Change Since Base Year
**Y Axis Label:** Growth Rate
**Y Axis # format:** $x,xxx (note, not as pictured)
**Mouse-over display:** $x,xxx
**X axis label:** Year
**Legend Note:** Geography: * Regional, ** County
**Singular/ Plural Source:** Source:
**Source line:** U.S. Census Bureau's Business Formation Statistics
### Chart 1c - Where Select Value Type = Total GDP

**Chart Type:** line
**Data File:** gdp_chart1c.csv
**Chart Title:** GDP Growth
**Drop-down 1 Label:** Select Industry:
**Drop-down 1 List (to be formatted as pictured) and CSV header endings:**
Dropdown List | CSV Header Endings
-- | --
All Industries | total
Goods | goods
Agriculture, Forestry, Fishing, and Mining | ag_for_fish_min
Construction | construction
Manufacturing | manufacturing
Durable Goods | manu_durable
Nondurable Goods | manu_nondurable
Services | services
Arts, Entertainment, Recreation, Accommodation, and Food Services | art_ent_rec_acc_food
Educational Services | ed_services
Finance, Insurance, Real Estate | fin_insur_real_est
Health Care and Social Assistance | health_social_assist
Information | information
Other Services (except Government) | other_services
Professional Services | prof_services
Retail Trade | retail_trade
Transportation, Warehousing, and Utilities | transp_warehouse_util
Wholesale Trade | wholesale_trade
Government and government enterprises | gov
**Legend Items and CSV header beginnings:**
Legend Label | CSV Header Beginnings
-- | --
DVRPC Region* | dvrpc_
NJ Counties* | njcos_
PA Suburban Counties* | pasubcos_
Bucks** | bucks_
Burlington** | burl_
Camden** | camd_
Chester** | ches_
Delaware** | del_
Gloucester** | glo_
Mercer** | mer_
Montgomery** | mont_
Philadelphia** | phil_
**Drop-down 2 Label:** Select Value Type:
**Drop-down 2 List:** Annual Change
**Y Axis Label:** Total GDP in Millions of Dollars (2021)
**Y Axis # format:** $x,xxx (note, not as pictured)
**Mouse-over display:** $x,xxx
**X axis label:** Year
**Legend Note:** Geography: * Regional, ** County
**Singular/ Plural Source:** Source:
**Source line:** U.S. Census Bureau's Business Formation Statistics
| priority | gdp a b c hachadorian this is the first of three charts for gross domestic product the csvs for these are formatted the way you ll be getting them going forward there are some known data issues that will soon be fixed and we ll need to swap out the csvs but i wanted to get you this now so you can begin building the page the first chart is the most complex because it has two dropdown menus i ll start with a visual mockups of what the chart will look like for each of the second dropdown selections then some specifications below chart where select value type annual change chart type line data file gdp csv chart title gdp growth drop down label select industry drop down list to be formatted as pictured and csv header endings dropdown list csv header endings all industries total goods goods agriculture forestry fishing and mining ag for fish min construction construction manufacturing manufacturing durable goods manu durable nondurable goods manu nondurable services services arts entertainment recreation accommodation and food services art ent rec acc food educational services ed services finance insurance real estate fin insur real est health care and social assistance health social assist information information other services except government other services professional services prof services retail trade retail trade transportation warehousing and utilities transp warehouse util wholesale trade wholesale trade government and government enterprises gov legend items and csv header beginnings legend label csv header beginnings dvrpc region dvrpc nj counties njcos pa suburban counties pasubcos bucks bucks burlington burl camden camd chester ches delaware del gloucester glo mercer mer montgomery mont philadelphia phil drop down label select value type drop down list annual change y axis label growth rate y axis format x xxx note not as pictured mouse over display x xxx x axis label year legend note geography regional county singular plural source source source line u s census bureau s business formation statistics chart where select value type change since base year chart type line data file gdp csv chart title gdp growth drop down label select industry drop down list to be formatted as pictured and csv header endings dropdown list csv header endings all industries total goods goods agriculture forestry fishing and mining ag for fish min construction construction manufacturing manufacturing durable goods manu durable nondurable goods manu nondurable services services arts entertainment recreation accommodation and food services art ent rec acc food educational services ed services finance insurance real estate fin insur real est health care and social assistance health social assist information information other services except government other services professional services prof services retail trade retail trade transportation warehousing and utilities transp warehouse util wholesale trade wholesale trade government and government enterprises gov legend items and csv header beginnings legend label csv header beginnings dvrpc region dvrpc nj counties njcos pa suburban counties pasubcos bucks bucks burlington burl camden camd chester ches delaware del gloucester glo mercer mer montgomery mont philadelphia phil drop down label select value type drop down list change since base year y axis label growth rate y axis format x xxx note not as pictured mouse over display x xxx x axis label year legend note geography regional county singular plural source source source line u s census bureau s business formation statistics chart where select value type total gdp chart type line data file gdp csv chart title gdp growth drop down label select industry drop down list to be formatted as pictured and csv header endings dropdown list csv header endings all industries total goods goods agriculture forestry fishing and mining ag for fish min construction construction manufacturing manufacturing durable goods manu durable nondurable goods manu nondurable services services arts entertainment recreation accommodation and food services art ent rec acc food educational services ed services finance insurance real estate fin insur real est health care and social assistance health social assist information information other services except government other services professional services prof services retail trade retail trade transportation warehousing and utilities transp warehouse util wholesale trade wholesale trade government and government enterprises gov legend items and csv header beginnings legend label csv header beginnings dvrpc region dvrpc nj counties njcos pa suburban counties pasubcos bucks bucks burlington burl camden camd chester ches delaware del gloucester glo mercer mer montgomery mont philadelphia phil drop down label select value type drop down list annual change y axis label total gdp in millions of dollars y axis format x xxx note not as pictured mouse over display x xxx x axis label year legend note geography regional county singular plural source source source line u s census bureau s business formation statistics | 1 |
567,766 | 16,891,705,085 | IssuesEvent | 2021-06-23 10:01:35 | oceanprotocol/provider | https://api.github.com/repos/oceanprotocol/provider | closed | Random timeouts | Priority: High Type: Bug | While testing provider on rinkeby i often get timeout for like 1 minute and this it's back on. There are multiple errors in the logs, not sure if related. During this time the pods were up and running.
```
2021-06-15 13:11:27,711 - ocean_provider.myapp - ERROR - Exception on /api/v1/services/fileinfo [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/dist-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.8/dist-packages/flask_sieve/validator.py", line 81, in wrapper
return fn(*args, **kwargs)
File "/ocean-provider/ocean_provider/routes/consume.py", line 177, in fileinfo
url_list = get_asset_download_urls(
File "/ocean-provider/ocean_provider/utils/util.py", line 171, in get_asset_download_urls
return [get_download_url(url, config_file) for url in get_asset_urls(asset, wallet)]
TypeError: 'NoneType' object is not iterable
```
Consistently getting this on polygon as well

| 1.0 | Random timeouts - While testing provider on rinkeby i often get timeout for like 1 minute and this it's back on. There are multiple errors in the logs, not sure if related. During this time the pods were up and running.
```
2021-06-15 13:11:27,711 - ocean_provider.myapp - ERROR - Exception on /api/v1/services/fileinfo [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/dist-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.8/dist-packages/flask_sieve/validator.py", line 81, in wrapper
return fn(*args, **kwargs)
File "/ocean-provider/ocean_provider/routes/consume.py", line 177, in fileinfo
url_list = get_asset_download_urls(
File "/ocean-provider/ocean_provider/utils/util.py", line 171, in get_asset_download_urls
return [get_download_url(url, config_file) for url in get_asset_urls(asset, wallet)]
TypeError: 'NoneType' object is not iterable
```
Consistently getting this on polygon as well

| priority | random timeouts while testing provider on rinkeby i often get timeout for like minute and this it s back on there are multiple errors in the logs not sure if related during this time the pods were up and running ocean provider myapp error exception on api services fileinfo traceback most recent call last file usr local lib dist packages flask app py line in wsgi app response self full dispatch request file usr local lib dist packages flask app py line in full dispatch request rv self handle user exception e file usr local lib dist packages flask cors extension py line in wrapped function return cors after request app make response f args kwargs file usr local lib dist packages flask app py line in handle user exception reraise exc type exc value tb file usr local lib dist packages flask compat py line in reraise raise value file usr local lib dist packages flask app py line in full dispatch request rv self dispatch request file usr local lib dist packages flask app py line in dispatch request return self view functions req view args file usr local lib dist packages flask sieve validator py line in wrapper return fn args kwargs file ocean provider ocean provider routes consume py line in fileinfo url list get asset download urls file ocean provider ocean provider utils util py line in get asset download urls return typeerror nonetype object is not iterable consistently getting this on polygon as well | 1 |
632,718 | 20,205,147,825 | IssuesEvent | 2022-02-11 19:25:13 | zebscripts/AFK-Daily | https://api.github.com/repos/zebscripts/AFK-Daily | closed | [BUG] Exec breaks when using multiple arguments | Type: Bug :beetle: Status: In progress :clock1030: File: deploy.sh :file_folder: Priority: High :fire: | When I have an update, the script break because I'm using multiple arguments 😢
The line at fault is the new `exec` one:
https://github.com/zebscripts/AFK-Daily/blob/6a154dd02ab8e58ed00fed89bd52858d31c04f64/deploy.sh#L380
It's converting the arguments in a single one and `getopts` can't do anything with it, so it's returning `Invalid option` or `illegal option`
Here is an example of the solution: <https://github.com/kevingrillet/ShellUtils/blob/main/experiments/exp_exec.sh>
Just remove the quotes, and it should work 😄 | 1.0 | [BUG] Exec breaks when using multiple arguments - When I have an update, the script break because I'm using multiple arguments 😢
The line at fault is the new `exec` one:
https://github.com/zebscripts/AFK-Daily/blob/6a154dd02ab8e58ed00fed89bd52858d31c04f64/deploy.sh#L380
It's converting the arguments in a single one and `getopts` can't do anything with it, so it's returning `Invalid option` or `illegal option`
Here is an example of the solution: <https://github.com/kevingrillet/ShellUtils/blob/main/experiments/exp_exec.sh>
Just remove the quotes, and it should work 😄 | priority | exec breaks when using multiple arguments when i have an update the script break because i m using multiple arguments 😢 the line at fault is the new exec one it s converting the arguments in a single one and getopts can t do anything with it so it s returning invalid option or illegal option here is an example of the solution just remove the quotes and it should work 😄 | 1 |
696,717 | 23,912,669,252 | IssuesEvent | 2022-09-09 09:40:07 | oursky/django-material-demo | https://api.github.com/repos/oursky/django-material-demo | closed | Data validation in edit page | priority/high | - error message display
- validation per field
- validation across the form | 1.0 | Data validation in edit page - - error message display
- validation per field
- validation across the form | priority | data validation in edit page error message display validation per field validation across the form | 1 |
368,243 | 10,868,716,659 | IssuesEvent | 2019-11-15 04:57:37 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Service generation with inline path parameter objects isn't getting generated. | Area/Tooling Component/ToolSwagger Points/2 Priority/High Type/Bug | **Description:**
Service generation for a Yaml file with inline objects for Path parameters will generate the service with template errors.
**Affected Versions:**
version 1.0.0
| 1.0 | Service generation with inline path parameter objects isn't getting generated. - **Description:**
Service generation for a Yaml file with inline objects for Path parameters will generate the service with template errors.
**Affected Versions:**
version 1.0.0
| priority | service generation with inline path parameter objects isn t getting generated description service generation for a yaml file with inline objects for path parameters will generate the service with template errors affected versions version | 1 |
539,395 | 15,787,927,858 | IssuesEvent | 2021-04-01 19:59:34 | zenitheesc/zenith-website | https://api.github.com/repos/zenitheesc/zenith-website | closed | Inserir os CubeSats renderizados na página de Projetos | high-priority | Os CubeSats devem ser colocados no hero principal da página de projetos e posicionados em seus respectivos projetos na seção de CubeSats.
## Detalhes
Um dos CubeSats ainda não foi renderizado pelo @jorgemrisco, porque ele não conseguiu abrir o arquivo. | 1.0 | Inserir os CubeSats renderizados na página de Projetos - Os CubeSats devem ser colocados no hero principal da página de projetos e posicionados em seus respectivos projetos na seção de CubeSats.
## Detalhes
Um dos CubeSats ainda não foi renderizado pelo @jorgemrisco, porque ele não conseguiu abrir o arquivo. | priority | inserir os cubesats renderizados na página de projetos os cubesats devem ser colocados no hero principal da página de projetos e posicionados em seus respectivos projetos na seção de cubesats detalhes um dos cubesats ainda não foi renderizado pelo jorgemrisco porque ele não conseguiu abrir o arquivo | 1 |
153,040 | 5,873,688,589 | IssuesEvent | 2017-05-15 14:32:28 | ncProjectRoot/nc-crm | https://api.github.com/repos/ncProjectRoot/nc-crm | closed | FIX: REST API | priority-high wontfix | Змінити формат uri запитів в рест контроллерах відповідно до статтей обговорених в каналі controller
відповідно переписати код на клієнті
протестувати всі зміни | 1.0 | FIX: REST API - Змінити формат uri запитів в рест контроллерах відповідно до статтей обговорених в каналі controller
відповідно переписати код на клієнті
протестувати всі зміни | priority | fix rest api змінити формат uri запитів в рест контроллерах відповідно до статтей обговорених в каналі controller відповідно переписати код на клієнті протестувати всі зміни | 1 |
799,630 | 28,310,518,789 | IssuesEvent | 2023-04-10 15:00:55 | bounswe/bounswe2023group5 | https://api.github.com/repos/bounswe/bounswe2023group5 | closed | Adding Contributions of Ali Başaran | Priority: High Type: Discussion Status: In Progress | ### Description
Everyone is responsible for adding a Individual Contribution Report to the Milestone report. I will add the summary of my contributions to my personal wiki page in order to help me while creating my report.
### 👮♀️ Reviewer
None
### ⏰ Deadline
10.04.2023 | 1.0 | Adding Contributions of Ali Başaran - ### Description
Everyone is responsible for adding a Individual Contribution Report to the Milestone report. I will add the summary of my contributions to my personal wiki page in order to help me while creating my report.
### 👮♀️ Reviewer
None
### ⏰ Deadline
10.04.2023 | priority | adding contributions of ali başaran description everyone is responsible for adding a individual contribution report to the milestone report i will add the summary of my contributions to my personal wiki page in order to help me while creating my report 👮♀️ reviewer none ⏰ deadline | 1 |
802,472 | 28,963,661,925 | IssuesEvent | 2023-05-10 06:05:28 | EnMAP-Box/enmap-box | https://api.github.com/repos/EnMAP-Box/enmap-box | opened | [Raster Band Stacking] add option for preventing duplicated output band names | feature request priority: high | _Requested by @d-pflugmacher._
When stacking bands, duplicated band names can occure, e.g.

It is proposed to have an option "Enumerate duplicates" (turned on by default), resulting in altered band names, e.g.

| 1.0 | [Raster Band Stacking] add option for preventing duplicated output band names - _Requested by @d-pflugmacher._
When stacking bands, duplicated band names can occure, e.g.

It is proposed to have an option "Enumerate duplicates" (turned on by default), resulting in altered band names, e.g.

| priority | add option for preventing duplicated output band names requested by d pflugmacher when stacking bands duplicated band names can occure e g it is proposed to have an option enumerate duplicates turned on by default resulting in altered band names e g | 1 |
650,099 | 21,334,603,386 | IssuesEvent | 2022-04-18 13:08:36 | merico-dev/lake | https://api.github.com/repos/merico-dev/lake | closed | config-ui: jira issue type mappings and settings disabled, missing rest api proxy | type/bug priority/high | ## Config-UI / Data Integrations / JIRA
The **JIRA** Settings are not accessble due to missing endpoint for JIRA's REST API Proxy which was previously provided by backend but appears to be missing from main branch, which will make JIRA settings unusable.
## Screenshots
<img width="1336" alt="Screen Shot 2022-04-12 at 12 06 02 PM" src="https://user-images.githubusercontent.com/1742233/163006618-5fb550ac-9c80-4af5-8a34-84f896e216ae.png">
<img width="746" alt="Screen Shot 2022-04-12 at 12 05 21 PM" src="https://user-images.githubusercontent.com/1742233/163006622-397f220b-99d2-47f4-af98-31e40b5335e8.png">
<img width="773" alt="Screen Shot 2022-04-12 at 12 07 07 PM" src="https://user-images.githubusercontent.com/1742233/163006614-4c92db09-e7a4-4a32-8f59-c9b29fa9c395.png">
| 1.0 | config-ui: jira issue type mappings and settings disabled, missing rest api proxy - ## Config-UI / Data Integrations / JIRA
The **JIRA** Settings are not accessble due to missing endpoint for JIRA's REST API Proxy which was previously provided by backend but appears to be missing from main branch, which will make JIRA settings unusable.
## Screenshots
<img width="1336" alt="Screen Shot 2022-04-12 at 12 06 02 PM" src="https://user-images.githubusercontent.com/1742233/163006618-5fb550ac-9c80-4af5-8a34-84f896e216ae.png">
<img width="746" alt="Screen Shot 2022-04-12 at 12 05 21 PM" src="https://user-images.githubusercontent.com/1742233/163006622-397f220b-99d2-47f4-af98-31e40b5335e8.png">
<img width="773" alt="Screen Shot 2022-04-12 at 12 07 07 PM" src="https://user-images.githubusercontent.com/1742233/163006614-4c92db09-e7a4-4a32-8f59-c9b29fa9c395.png">
| priority | config ui jira issue type mappings and settings disabled missing rest api proxy config ui data integrations jira the jira settings are not accessble due to missing endpoint for jira s rest api proxy which was previously provided by backend but appears to be missing from main branch which will make jira settings unusable screenshots img width alt screen shot at pm src img width alt screen shot at pm src img width alt screen shot at pm src | 1 |
507,961 | 14,685,581,406 | IssuesEvent | 2021-01-01 10:06:12 | dhowe/ritaweb | https://api.github.com/repos/dhowe/ritaweb | closed | Port ReplaceableWriting example | priority: High | to Processing
from here: /ritaweb/www/examples/p5/ReplaceableWriting/
to here: /rita/examples/processing/ReplaceableWriting/ReplaceableWriting.pde

| 1.0 | Port ReplaceableWriting example - to Processing
from here: /ritaweb/www/examples/p5/ReplaceableWriting/
to here: /rita/examples/processing/ReplaceableWriting/ReplaceableWriting.pde

| priority | port replaceablewriting example to processing from here ritaweb www examples replaceablewriting to here rita examples processing replaceablewriting replaceablewriting pde | 1 |
827,667 | 31,791,771,769 | IssuesEvent | 2023-09-13 04:20:58 | WasmEdge/WasmEdge | https://api.github.com/repos/WasmEdge/WasmEdge | opened | CI: Use `fedora:rawhide` and `debian:testing` for the building process | priority:high c-CI | ## Motivation
See #2802
WasmEdge may have issues applying new compiler versions or the latest upstream dependencies. We must have a corresponding CI environment to validate to ensure that we won't break anything on such environments.
## Details
Create a workflow for the following environments:
- [ ] Fedora Rawhide
- [ ] Debian Testing | 1.0 | CI: Use `fedora:rawhide` and `debian:testing` for the building process - ## Motivation
See #2802
WasmEdge may have issues applying new compiler versions or the latest upstream dependencies. We must have a corresponding CI environment to validate to ensure that we won't break anything on such environments.
## Details
Create a workflow for the following environments:
- [ ] Fedora Rawhide
- [ ] Debian Testing | priority | ci use fedora rawhide and debian testing for the building process motivation see wasmedge may have issues applying new compiler versions or the latest upstream dependencies we must have a corresponding ci environment to validate to ensure that we won t break anything on such environments details create a workflow for the following environments fedora rawhide debian testing | 1 |
230,019 | 7,603,368,798 | IssuesEvent | 2018-04-29 13:51:21 | esaude/esaude-emr-poc | https://api.github.com/repos/esaude/esaude-emr-poc | closed | [Registration] Remove patient dialog messages are in english for a Pt config | High Priority bug | Actual Results
--
The message of remove patient dialog box is displaying in English even when the configured language for the session is Pt
Expected results
--
Should show the message in Portuguese
Steps to reproduce
--
Login > Registration module > Search and select a patient with visit history> Remover
Screenshot/Attachment (Optional)
--
A visual description of the unexpected behaviour.

| 1.0 | [Registration] Remove patient dialog messages are in english for a Pt config - Actual Results
--
The message of remove patient dialog box is displaying in English even when the configured language for the session is Pt
Expected results
--
Should show the message in Portuguese
Steps to reproduce
--
Login > Registration module > Search and select a patient with visit history> Remover
Screenshot/Attachment (Optional)
--
A visual description of the unexpected behaviour.

| priority | remove patient dialog messages are in english for a pt config actual results the message of remove patient dialog box is displaying in english even when the configured language for the session is pt expected results should show the message in portuguese steps to reproduce login registration module search and select a patient with visit history remover screenshot attachment optional a visual description of the unexpected behaviour | 1 |
35,596 | 2,791,327,430 | IssuesEvent | 2015-05-10 01:36:03 | chrislo27/ProjectMP | https://api.github.com/repos/chrislo27/ProjectMP | closed | TimeOfDay should have its own percentage in the enum | high priority refactor | Instead of time of day percentages in the main class, it should be per enum, calculated later. | 1.0 | TimeOfDay should have its own percentage in the enum - Instead of time of day percentages in the main class, it should be per enum, calculated later. | priority | timeofday should have its own percentage in the enum instead of time of day percentages in the main class it should be per enum calculated later | 1 |
355,542 | 10,581,973,020 | IssuesEvent | 2019-10-08 10:22:22 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | closed | Deployment constantly fails during must-gather logs collection | High Priority bug | Deployment constantly fails for me (3/3). Happens during the execution of this command, which hangs for more than 10 minutes:
22:27:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ebenahar/ebenahar-ocs-dir/auth/kubeconfig adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather --dest-dir=/tmp/failed_testcase_ocs_logs_1570214678/deployment_ocs_logs/ocs_must_gather
Access to http://pkgs.devel.redhat.com/cgit/containers/ocs-registry/plain/deploy-with-olm.yaml?h=ocs-4.2-rhel-8 is successful though
==========
Snippent:
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='pkgs.devel.redhat.com', port=80): Max retries exceeded with url: /cgit/containers/ocs-registry/plain/deploy-with-olm.yaml?h=ocs-4.2-rhel-8 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f008f3fa780>: Failed to establish a new connection: [Errno -2] Name or service not known'))
ocs-venv/lib64/python3.7/site-packages/requests/adapters.py:516: ConnectionError
=============
full stack traceback:
http://pastebin.test.redhat.com/803357
=============
git log:
commit e9f673f5f2cd0ee6f0cdab9325b3d04b5aa6617d (HEAD, ocs-ci/master, ebenahar/master)
Author: Petr Balogh <petr-balogh@users.noreply.github.com>
Date: Fri Oct 4 14:46:12 2019 +0200
Move to downstream (#839)
* Move to downstream
Fixes: #818
Signed-off-by: Petr Balogh <pbalogh@redhat.com>
| 1.0 | Deployment constantly fails during must-gather logs collection - Deployment constantly fails for me (3/3). Happens during the execution of this command, which hangs for more than 10 minutes:
22:27:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ebenahar/ebenahar-ocs-dir/auth/kubeconfig adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather --dest-dir=/tmp/failed_testcase_ocs_logs_1570214678/deployment_ocs_logs/ocs_must_gather
Access to http://pkgs.devel.redhat.com/cgit/containers/ocs-registry/plain/deploy-with-olm.yaml?h=ocs-4.2-rhel-8 is successful though
==========
Snippent:
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='pkgs.devel.redhat.com', port=80): Max retries exceeded with url: /cgit/containers/ocs-registry/plain/deploy-with-olm.yaml?h=ocs-4.2-rhel-8 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f008f3fa780>: Failed to establish a new connection: [Errno -2] Name or service not known'))
ocs-venv/lib64/python3.7/site-packages/requests/adapters.py:516: ConnectionError
=============
full stack traceback:
http://pastebin.test.redhat.com/803357
=============
git log:
commit e9f673f5f2cd0ee6f0cdab9325b3d04b5aa6617d (HEAD, ocs-ci/master, ebenahar/master)
Author: Petr Balogh <petr-balogh@users.noreply.github.com>
Date: Fri Oct 4 14:46:12 2019 +0200
Move to downstream (#839)
* Move to downstream
Fixes: #818
Signed-off-by: Petr Balogh <pbalogh@redhat.com>
| priority | deployment constantly fails during must gather logs collection deployment constantly fails for me happens during the execution of this command which hangs for more than minutes mainthread ocs ci utility utils info executing command oc kubeconfig home ebenahar ebenahar ocs dir auth kubeconfig adm must gather image quay io rhceph dev ocs must gather dest dir tmp failed testcase ocs logs deployment ocs logs ocs must gather access to is successful though snippent e requests exceptions connectionerror httpconnectionpool host pkgs devel redhat com port max retries exceeded with url cgit containers ocs registry plain deploy with olm yaml h ocs rhel caused by newconnectionerror failed to establish a new connection name or service not known ocs venv site packages requests adapters py connectionerror full stack traceback git log commit head ocs ci master ebenahar master author petr balogh date fri oct move to downstream move to downstream fixes signed off by petr balogh | 1 |
697,750 | 23,951,882,150 | IssuesEvent | 2022-09-12 12:13:54 | FactorioAntigrief/FactorioAntigrief | https://api.github.com/repos/FactorioAntigrief/FactorioAntigrief | closed | Add an option to replace the APIURL when generating the banlist | scope:community-bot priority:high type:feature | Add the option of replacing the APIURL when generating the banlist, so that the backend can be contacted through a localhost domain, but still show up as another (factoriobans.club) in the exported JSON file. | 1.0 | Add an option to replace the APIURL when generating the banlist - Add the option of replacing the APIURL when generating the banlist, so that the backend can be contacted through a localhost domain, but still show up as another (factoriobans.club) in the exported JSON file. | priority | add an option to replace the apiurl when generating the banlist add the option of replacing the apiurl when generating the banlist so that the backend can be contacted through a localhost domain but still show up as another factoriobans club in the exported json file | 1 |
618,516 | 19,472,693,621 | IssuesEvent | 2021-12-24 05:47:00 | bryntum/support | https://api.github.com/repos/bryntum/support | closed | MS Project export feature uses wrong UID type | bug resolved high-priority premium forum | MS Project export feature uses wrong UID type which leads to MS Project skips some exported tasks.
Reported here: https://www.bryntum.com/forum/viewtopic.php?p=98057#p98057 | 1.0 | MS Project export feature uses wrong UID type - MS Project export feature uses wrong UID type which leads to MS Project skips some exported tasks.
Reported here: https://www.bryntum.com/forum/viewtopic.php?p=98057#p98057 | priority | ms project export feature uses wrong uid type ms project export feature uses wrong uid type which leads to ms project skips some exported tasks reported here | 1 |
283,900 | 8,725,420,786 | IssuesEvent | 2018-12-10 09:22:05 | genetics-statistics/GEMMA | https://api.github.com/repos/genetics-statistics/GEMMA | closed | Different results using GEMMA 0.98 vs. GEMMA 0.96 | bug high priority lmm | Hi,
I have run the same data with GEMMA version 0.96 and version 0.98. The results are different (0.98 are incorrect) for some reason.
The command line:
```
$ gemma -bfile snps.plink -lmm 2 -k K -o g96 -maf 0.05 -miss 0.5 -n 1
Reading Files ...
$ gemma-0.98-linux-static -bfile snps.plink -lmm 2 -k K -o g98 -maf 0.05 -miss 0.5 -n 1
GEMMA 0.98 (2018-09-28) by Xiang Zhou and team (C) 2012-2018
Reading Files ...
```
Results:
```
output$ cat g98.assoc.txt | sort -gk10 | head
chr rs ps n_miss allele1 allele0 af logl_H1 l_mle p_lrt
2 . 216384 155 A G 0.060 -3.354064e+03 1.000000e+05 1.646839e-07
1 . 29609974 25 T A 0.094 -3.356651e+03 1.000000e+05 2.410560e-06
1 . 28431194 26 A G 0.114 -3.356769e+03 1.000000e+05 2.727838e-06
4 . 521246 262 G A 0.057 -3.357233e+03 1.000000e+05 4.423640e-06
1 . 29610020 34 C T 0.100 -3.358208e+03 1.000000e+05 1.226556e-05
3 . 10188537 280 A G 0.052 -3.358253e+03 1.000000e+05 1.285749e-05
3 . 11036701 428 T C 0.074 -3.358303e+03 1.000000e+05 1.354579e-05
1 . 29609929 27 T A 0.090 -3.358513e+03 1.000000e+05 1.688385e-05
2 . 4249391 146 G A 0.163 -3.358633e+03 1.000000e+05 1.914765e-05
output$ cat g96.assoc.txt | sort -gk9 | head
chr rs ps n_miss allele1 allele0 af l_mle p_lrt
5 . 18590327 24 A C 0.466 1.000000e+05 2.788929e-10
1 . 24339560 29 G T 0.433 1.000000e+05 3.300386e-10
5 . 18592889 33 T A 0.257 1.000000e+05 5.178977e-10
5 . 3188327 19 T C 0.231 1.000000e+05 7.064749e-10
5 . 23193229 212 T C 0.465 1.000000e+05 3.301685e-09
5 . 18600223 29 T G 0.498 1.000000e+05 3.936730e-09
5 . 23184412 168 G A 0.311 1.000000e+05 5.107127e-09
5 . 18590501 43 G C 0.163 1.000000e+05 5.329060e-09
5 . 18600802 84 A T 0.530 1.000000e+05 6.104173e-09
```
I am now uploading the data to my google drive and can share a link soon in a private mail.
Thanks for the help,
Yoav
| 1.0 | Different results using GEMMA 0.98 vs. GEMMA 0.96 - Hi,
I have run the same data with GEMMA version 0.96 and version 0.98. The results are different (0.98 are incorrect) for some reason.
The command line:
```
$ gemma -bfile snps.plink -lmm 2 -k K -o g96 -maf 0.05 -miss 0.5 -n 1
Reading Files ...
$ gemma-0.98-linux-static -bfile snps.plink -lmm 2 -k K -o g98 -maf 0.05 -miss 0.5 -n 1
GEMMA 0.98 (2018-09-28) by Xiang Zhou and team (C) 2012-2018
Reading Files ...
```
Results:
```
output$ cat g98.assoc.txt | sort -gk10 | head
chr rs ps n_miss allele1 allele0 af logl_H1 l_mle p_lrt
2 . 216384 155 A G 0.060 -3.354064e+03 1.000000e+05 1.646839e-07
1 . 29609974 25 T A 0.094 -3.356651e+03 1.000000e+05 2.410560e-06
1 . 28431194 26 A G 0.114 -3.356769e+03 1.000000e+05 2.727838e-06
4 . 521246 262 G A 0.057 -3.357233e+03 1.000000e+05 4.423640e-06
1 . 29610020 34 C T 0.100 -3.358208e+03 1.000000e+05 1.226556e-05
3 . 10188537 280 A G 0.052 -3.358253e+03 1.000000e+05 1.285749e-05
3 . 11036701 428 T C 0.074 -3.358303e+03 1.000000e+05 1.354579e-05
1 . 29609929 27 T A 0.090 -3.358513e+03 1.000000e+05 1.688385e-05
2 . 4249391 146 G A 0.163 -3.358633e+03 1.000000e+05 1.914765e-05
output$ cat g96.assoc.txt | sort -gk9 | head
chr rs ps n_miss allele1 allele0 af l_mle p_lrt
5 . 18590327 24 A C 0.466 1.000000e+05 2.788929e-10
1 . 24339560 29 G T 0.433 1.000000e+05 3.300386e-10
5 . 18592889 33 T A 0.257 1.000000e+05 5.178977e-10
5 . 3188327 19 T C 0.231 1.000000e+05 7.064749e-10
5 . 23193229 212 T C 0.465 1.000000e+05 3.301685e-09
5 . 18600223 29 T G 0.498 1.000000e+05 3.936730e-09
5 . 23184412 168 G A 0.311 1.000000e+05 5.107127e-09
5 . 18590501 43 G C 0.163 1.000000e+05 5.329060e-09
5 . 18600802 84 A T 0.530 1.000000e+05 6.104173e-09
```
I am now uploading the data to my google drive and can share a link soon in a private mail.
Thanks for the help,
Yoav
| priority | different results using gemma vs gemma hi i have run the same data with gemma version and version the results are different are incorrect for some reason the command line gemma bfile snps plink lmm k k o maf miss n reading files gemma linux static bfile snps plink lmm k k o maf miss n gemma by xiang zhou and team c reading files results output cat assoc txt sort head chr rs ps n miss af logl l mle p lrt a g t a a g g a c t a g t c t a g a output cat assoc txt sort head chr rs ps n miss af l mle p lrt a c g t t a t c t c t g g a g c a t i am now uploading the data to my google drive and can share a link soon in a private mail thanks for the help yoav | 1 |
748,604 | 26,128,891,406 | IssuesEvent | 2022-12-28 23:50:48 | andrefdre/Dora_the_mug_finder_SAVI | https://api.github.com/repos/andrefdre/Dora_the_mug_finder_SAVI | closed | Obtaining a 2D image of objects for classification | enhancement high_priority | To get the 2D image you can use ```image = vis.capture_screen_float_buffer()```.
http://www.open3d.org/docs/release/tutorial/visualization/customized_visualization.html
- Make camera positioning automatic (object center or corner, both points are already calculated and saved in the objects dictionary).
- See if you can get the image without having to open the preview. | 1.0 | Obtaining a 2D image of objects for classification - To get the 2D image you can use ```image = vis.capture_screen_float_buffer()```.
http://www.open3d.org/docs/release/tutorial/visualization/customized_visualization.html
- Make camera positioning automatic (object center or corner, both points are already calculated and saved in the objects dictionary).
- See if you can get the image without having to open the preview. | priority | obtaining a image of objects for classification to get the image you can use image vis capture screen float buffer make camera positioning automatic object center or corner both points are already calculated and saved in the objects dictionary see if you can get the image without having to open the preview | 1 |
243,625 | 7,860,039,060 | IssuesEvent | 2018-06-21 18:34:38 | VulcanForge/pvp-mode | https://api.github.com/repos/VulcanForge/pvp-mode | opened | Fix that hired units can attack each other of on of the owners isn't logged in | breaking change bug compatibility high priority | That's because MC then cannot obtain `EntityPlayer` instances for them. So the data needs to be accessed independent of the player entities. | 1.0 | Fix that hired units can attack each other of on of the owners isn't logged in - That's because MC then cannot obtain `EntityPlayer` instances for them. So the data needs to be accessed independent of the player entities. | priority | fix that hired units can attack each other of on of the owners isn t logged in that s because mc then cannot obtain entityplayer instances for them so the data needs to be accessed independent of the player entities | 1 |
642,131 | 20,868,095,161 | IssuesEvent | 2022-03-22 09:22:07 | darrylsyms/fretwise-app | https://api.github.com/repos/darrylsyms/fretwise-app | closed | Video Embeds Presentation (Vimeo Block) | enhancement status in progress hook request high priority | Video embeds look ancient. At the very least, **rounded corners** are necessary to match the “modern” theme of the app. A secondary improvement would be for the video to stretch to the full width of the Screen.
This could potentially be change via a hook. Though I believe that this would be an excellent feature by default for all customers. | 1.0 | Video Embeds Presentation (Vimeo Block) - Video embeds look ancient. At the very least, **rounded corners** are necessary to match the “modern” theme of the app. A secondary improvement would be for the video to stretch to the full width of the Screen.
This could potentially be change via a hook. Though I believe that this would be an excellent feature by default for all customers. | priority | video embeds presentation vimeo block video embeds look ancient at the very least rounded corners are necessary to match the “modern” theme of the app a secondary improvement would be for the video to stretch to the full width of the screen this could potentially be change via a hook though i believe that this would be an excellent feature by default for all customers | 1 |
422,697 | 12,287,282,888 | IssuesEvent | 2020-05-09 11:25:58 | jabranr/covidonation | https://api.github.com/repos/jabranr/covidonation | closed | Rebase CI branch with master on release | enhancement high priority | Rebase CI branch with master on a release so that `develop` branch remains up-to-date. | 1.0 | Rebase CI branch with master on release - Rebase CI branch with master on a release so that `develop` branch remains up-to-date. | priority | rebase ci branch with master on release rebase ci branch with master on a release so that develop branch remains up to date | 1 |
142,911 | 5,479,833,780 | IssuesEvent | 2017-03-13 04:26:48 | status-im/status-go | https://api.github.com/repos/status-im/status-go | closed | Find solution to blocking `eth` calls (during sync) | high-priority ready | **Problem**
- `eth` is not responsive up until sync is done
- tested in `geth --light --testnet console`, no `eth` calls is possible if LES headers sync is in progress
**Notes:**
- this issue is claimed to be solved by https://github.com/ethereum/go-ethereum/pull/3519 (but it isn't)
- issue introduced in https://github.com/ethereum/go-ethereum/commit/c57c54ce96628aeb6345776310123a80593f0143
**Resolution:**
- create bug fix request by describing the issue
- briefly review if solution can be found, if so then file PR to that bug fix request
| 1.0 | Find solution to blocking `eth` calls (during sync) - **Problem**
- `eth` is not responsive up until sync is done
- tested in `geth --light --testnet console`, no `eth` calls is possible if LES headers sync is in progress
**Notes:**
- this issue is claimed to be solved by https://github.com/ethereum/go-ethereum/pull/3519 (but it isn't)
- issue introduced in https://github.com/ethereum/go-ethereum/commit/c57c54ce96628aeb6345776310123a80593f0143
**Resolution:**
- create bug fix request by describing the issue
- briefly review if solution can be found, if so then file PR to that bug fix request
| priority | find solution to blocking eth calls during sync problem eth is not responsive up until sync is done tested in geth light testnet console no eth calls is possible if les headers sync is in progress notes this issue is claimed to be solved by but it isn t issue introduced in resolution create bug fix request by describing the issue briefly review if solution can be found if so then file pr to that bug fix request | 1 |
300,543 | 9,211,359,021 | IssuesEvent | 2019-03-09 14:44:42 | qgisissuebot/QGIS | https://api.github.com/repos/qgisissuebot/QGIS | closed | QGIS Crashed | Bug Priority: high | ---
Author Name: **Frank Schulze** (Frank Schulze)
Original Redmine Issue: 21483, https://issues.qgis.org/issues/21483
Original Date: 2019-03-05T16:48:04.808Z
---
Crash ID: 5ad41d6be94809f981e2ec9b06a0e20f1d0b0a87
Stack Trace
proj_lpz_dist :
proj_lpz_dist :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::QgsCoordinateTransform :
QHashData::free_helper :
QgsCoordinateTransform::addToCache :
QgsCoordinateTransform::invalidateCache :
QgsApplication::exitQgis :
QgisApp::~QgisApp :
CPLStringList::operator char * __ptr64 * __ptr64 :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.4.4-Madeira
QGIS code revision: f6ddc62fdb
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.4.0
Running against GDAL: 2.4.0
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17134
| 1.0 | QGIS Crashed - ---
Author Name: **Frank Schulze** (Frank Schulze)
Original Redmine Issue: 21483, https://issues.qgis.org/issues/21483
Original Date: 2019-03-05T16:48:04.808Z
---
Crash ID: 5ad41d6be94809f981e2ec9b06a0e20f1d0b0a87
Stack Trace
proj_lpz_dist :
proj_lpz_dist :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::QgsCoordinateTransform :
QHashData::free_helper :
QgsCoordinateTransform::addToCache :
QgsCoordinateTransform::invalidateCache :
QgsApplication::exitQgis :
QgisApp::~QgisApp :
CPLStringList::operator char * __ptr64 * __ptr64 :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.4.4-Madeira
QGIS code revision: f6ddc62fdb
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.4.0
Running against GDAL: 2.4.0
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17134
| priority | qgis crashed author name frank schulze frank schulze original redmine issue original date crash id stack trace proj lpz dist proj lpz dist qgscoordinatetransform transformpolygon qgscoordinatetransform transformpolygon qgscoordinatetransform qgscoordinatetransform qhashdata free helper qgscoordinatetransform addtocache qgscoordinatetransform invalidatecache qgsapplication exitqgis qgisapp qgisapp cplstringlist operator char main basethreadinitthunk rtluserthreadstart qgis info qgis version madeira qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version | 1 |
721,842 | 24,839,669,112 | IssuesEvent | 2022-10-26 11:44:12 | AY2223S1-CS2113-T17-3/tp | https://api.github.com/repos/AY2223S1-CS2113-T17-3/tp | closed | Timetable building feature and auto allocation of lessons | type.Epic priority.High | Builds a timetable with all current inputs, auto allocates lessons of each module and avoid clashes of lessons. | 1.0 | Timetable building feature and auto allocation of lessons - Builds a timetable with all current inputs, auto allocates lessons of each module and avoid clashes of lessons. | priority | timetable building feature and auto allocation of lessons builds a timetable with all current inputs auto allocates lessons of each module and avoid clashes of lessons | 1 |
416,908 | 12,152,468,948 | IssuesEvent | 2020-04-24 22:21:28 | kkkmail/ClmFSharp | https://api.github.com/repos/kkkmail/ClmFSharp | opened | Out of sync messages stall Messaging client | Priority: High Type: bug | If due to whatever reason some message cannot be successfully processed (for example message with disallowed status update to run queue), then the messages get stuck forever.
Find a solution and then implement it. | 1.0 | Out of sync messages stall Messaging client - If due to whatever reason some message cannot be successfully processed (for example message with disallowed status update to run queue), then the messages get stuck forever.
Find a solution and then implement it. | priority | out of sync messages stall messaging client if due to whatever reason some message cannot be successfully processed for example message with disallowed status update to run queue then the messages get stuck forever find a solution and then implement it | 1 |
89,405 | 3,793,607,437 | IssuesEvent | 2016-03-22 14:30:23 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | User file with zone and system ERV crashes | Priority S1 - High | Helpdesk ticket 11114
I had this issue running Open Studio: when adding a ERV HX at system level, I didn't delete the ERV at the zone level, and that running failed, with the energyplus.err file doesn't even show error message. So I check the IDF and see the energyplus crashed, with screenshot and IDF attached.
| 1.0 | User file with zone and system ERV crashes - Helpdesk ticket 11114
I had this issue running Open Studio: when adding a ERV HX at system level, I didn't delete the ERV at the zone level, and that running failed, with the energyplus.err file doesn't even show error message. So I check the IDF and see the energyplus crashed, with screenshot and IDF attached.
| priority | user file with zone and system erv crashes helpdesk ticket i had this issue running open studio when adding a erv hx at system level i didn t delete the erv at the zone level and that running failed with the energyplus err file doesn t even show error message so i check the idf and see the energyplus crashed with screenshot and idf attached | 1 |
69,379 | 3,298,037,254 | IssuesEvent | 2015-11-02 12:26:42 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | Rationalize the different ways we display dataset search results | DatasetLists Priority-High | We have at least 6 different ways of displaying search results. It would be good to reduce that number and standardize:
1. Display of icons (popular, quality checked, subnational, deleted, private)
1. Interface for filters (we have two ways: left sidebar and top drawer)
1. Where private/deleted datasets are shown to authorized users (deleted datasts not shown in standard search result and custom org, but shown on default org and dashboard; private datasets not shown on standard search result, but shown on custom/default org and dashboard)
1. If an authorized user has buttons for edit/delete on each item in the list (they are there on the dashboard, but not on the org page, for example)
1. Make pagination consistent (where it makes sense to do so).

| 1.0 | Rationalize the different ways we display dataset search results - We have at least 6 different ways of displaying search results. It would be good to reduce that number and standardize:
1. Display of icons (popular, quality checked, subnational, deleted, private)
1. Interface for filters (we have two ways: left sidebar and top drawer)
1. Where private/deleted datasets are shown to authorized users (deleted datasts not shown in standard search result and custom org, but shown on default org and dashboard; private datasets not shown on standard search result, but shown on custom/default org and dashboard)
1. If an authorized user has buttons for edit/delete on each item in the list (they are there on the dashboard, but not on the org page, for example)
1. Make pagination consistent (where it makes sense to do so).

| priority | rationalize the different ways we display dataset search results we have at least different ways of displaying search results it would be good to reduce that number and standardize display of icons popular quality checked subnational deleted private interface for filters we have two ways left sidebar and top drawer where private deleted datasets are shown to authorized users deleted datasts not shown in standard search result and custom org but shown on default org and dashboard private datasets not shown on standard search result but shown on custom default org and dashboard if an authorized user has buttons for edit delete on each item in the list they are there on the dashboard but not on the org page for example make pagination consistent where it makes sense to do so | 1 |
814,097 | 30,487,066,129 | IssuesEvent | 2023-07-18 03:46:03 | arwes/arwes | https://api.github.com/repos/arwes/arwes | closed | Define strategy to extend core components styles | complexity: high type: feature package: core priority: low | Core components do not support internal elements styles extension. To properly customize components, a simple way to extend elements styles is required.
A possible approach could be to provide a component prop `styles?: Record<string, CSSObject>` to merge with the internal built-in styles. The following features would also be useful:
- Theme and component props access.
- Media Queries deep merge/extension.
- Simple React prop internal manipulation and no HOCs.
| 1.0 | Define strategy to extend core components styles - Core components do not support internal elements styles extension. To properly customize components, a simple way to extend elements styles is required.
A possible approach could be to provide a component prop `styles?: Record<string, CSSObject>` to merge with the internal built-in styles. The following features would also be useful:
- Theme and component props access.
- Media Queries deep merge/extension.
- Simple React prop internal manipulation and no HOCs.
| priority | define strategy to extend core components styles core components do not support internal elements styles extension to properly customize components a simple way to extend elements styles is required a possible approach could be to provide a component prop styles record to merge with the internal built in styles the following features would also be useful theme and component props access media queries deep merge extension simple react prop internal manipulation and no hocs | 1 |
641,991 | 20,864,255,163 | IssuesEvent | 2022-03-22 04:26:59 | csarofeen/pytorch | https://api.github.com/repos/csarofeen/pytorch | closed | codegen uses undefined identifier | high priority Functorch | ### 🐛 Describe the bug
codegen error generated from simple view+native_layer_norm_backward.
Repro script:
```
import torch
inps = [(torch.Size([768]), torch.float32), (torch.Size([768]), torch.float32), (torch.Size([4, 512, 768]), torch.float32), (torch.Size([4, 512, 1]), torch.float32), (torch.Size([4, 512, 1]), torch.float32), (torch.Size([2048, 768]), torch.float32)]
inps = [torch.ones(shape, dtype=dtype, device='cuda') for (shape, dtype) in inps]
def forward(primals_5, primals_6, primals_15, getitem_2, getitem_1, mm_6):
view_19 = torch.ops.aten.view(mm_6, [4, 512, 768])
native_layer_norm_backward_1 = torch.ops.aten.native_layer_norm_backward(view_19, primals_15, [768], getitem_1, getitem_2, primals_6, primals_5, [True, True, True])
return (native_layer_norm_backward_1,)
f = torch.jit.script(forward)
with torch.jit.fuser("fuser2"):
for _ in range(5):
f(*inps)
```
Run script with `PYTORCH_NVFUSER_DISABLE_FALLBACK=1`
Generated error log:
```
CUDA NVRTC compile error: default_program(3358): error: identifier "i11" is undefined
```
where `i11` was used earlier in a predicate
```
if (((i221 < T8.size[2]) && (((((((((nvfuser_index_t)blockIdx.y) * (ceilDiv((ceilDiv((ceilDiv((ceilDiv((T8.size[0] * T8.size[1]), ((nvfuser_index_t)blockDim.y))), 4)), 1)), ((nvfuser_index_t)gridDim.y)))) + i174) * 4) + (i179 + nvfuser_zero)) * ((nvfuser_index_t)blockDim.y)) + ((nvfuser_index_t)threadIdx.y)) < (4 * (ceilDiv(i11, 4)))))) {
```
full log attached
[repro.txt](https://github.com/csarofeen/pytorch/files/8165743/repro.txt)
### Versions
Reproed on ToT devel. Looks like a real issue. | 1.0 | codegen uses undefined identifier - ### 🐛 Describe the bug
codegen error generated from simple view+native_layer_norm_backward.
Repro script:
```
import torch
inps = [(torch.Size([768]), torch.float32), (torch.Size([768]), torch.float32), (torch.Size([4, 512, 768]), torch.float32), (torch.Size([4, 512, 1]), torch.float32), (torch.Size([4, 512, 1]), torch.float32), (torch.Size([2048, 768]), torch.float32)]
inps = [torch.ones(shape, dtype=dtype, device='cuda') for (shape, dtype) in inps]
def forward(primals_5, primals_6, primals_15, getitem_2, getitem_1, mm_6):
view_19 = torch.ops.aten.view(mm_6, [4, 512, 768])
native_layer_norm_backward_1 = torch.ops.aten.native_layer_norm_backward(view_19, primals_15, [768], getitem_1, getitem_2, primals_6, primals_5, [True, True, True])
return (native_layer_norm_backward_1,)
f = torch.jit.script(forward)
with torch.jit.fuser("fuser2"):
for _ in range(5):
f(*inps)
```
Run script with `PYTORCH_NVFUSER_DISABLE_FALLBACK=1`
Generated error log:
```
CUDA NVRTC compile error: default_program(3358): error: identifier "i11" is undefined
```
where `i11` was used earlier in a predicate
```
if (((i221 < T8.size[2]) && (((((((((nvfuser_index_t)blockIdx.y) * (ceilDiv((ceilDiv((ceilDiv((ceilDiv((T8.size[0] * T8.size[1]), ((nvfuser_index_t)blockDim.y))), 4)), 1)), ((nvfuser_index_t)gridDim.y)))) + i174) * 4) + (i179 + nvfuser_zero)) * ((nvfuser_index_t)blockDim.y)) + ((nvfuser_index_t)threadIdx.y)) < (4 * (ceilDiv(i11, 4)))))) {
```
full log attached
[repro.txt](https://github.com/csarofeen/pytorch/files/8165743/repro.txt)
### Versions
Reproed on ToT devel. Looks like a real issue. | priority | codegen uses undefined identifier 🐛 describe the bug codegen error generated from simple view native layer norm backward repro script import torch inps torch torch size torch torch size torch torch size torch torch size torch torch size torch inps def forward primals primals primals getitem getitem mm view torch ops aten view mm native layer norm backward torch ops aten native layer norm backward view primals getitem getitem primals primals return native layer norm backward f torch jit script forward with torch jit fuser for in range f inps run script with pytorch nvfuser disable fallback generated error log cuda nvrtc compile error default program error identifier is undefined where was used earlier in a predicate if size nvfuser index t blockidx y ceildiv ceildiv ceildiv ceildiv size size nvfuser index t blockdim y nvfuser index t griddim y nvfuser zero nvfuser index t blockdim y nvfuser index t threadidx y ceildiv full log attached versions reproed on tot devel looks like a real issue | 1 |
69,842 | 3,315,908,116 | IssuesEvent | 2015-11-06 14:41:13 | jgirald/ES2015C | https://api.github.com/repos/jgirald/ES2015C | closed | Instantiate the resources (Wood, Gold, Food) | High Priority Map Task Team C | Estimated effort: 3 hours
Product Backlog (Map): As a player, I want to find (finite but enough) resources on the map, so that I can collect them. [High priority]
Ending condition: the different resources class has to be created and ready to use.
| 1.0 | Instantiate the resources (Wood, Gold, Food) - Estimated effort: 3 hours
Product Backlog (Map): As a player, I want to find (finite but enough) resources on the map, so that I can collect them. [High priority]
Ending condition: the different resources class has to be created and ready to use.
| priority | instantiate the resources wood gold food estimated effort hours product backlog map as a player i want to find finite but enough resources on the map so that i can collect them ending condition the different resources class has to be created and ready to use | 1 |
256,648 | 8,128,199,949 | IssuesEvent | 2018-08-17 10:49:29 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Provide strict CMake option to stop on first failure | Expected Use: 3 - Occasional Feature Impact: 4 - High Priority: Normal | When CMake'ing VisIt fails in some fundamental way, the error often scrolls off the screen and processing continues such that user is unaware of issue.
I propose adding a strict mode...
I wonder...is there a way to invoke CMake to die on first warning? I looked briefly and didn't find support for this in CMake itself. If not, maybe a way to globally set MESSAGE command's message type?
One issue I see in top-level CMakeLists.txt is that we use MESSAGE *a*lot* with type of STATUS even when issuing failure (e.g. NOT FOUND) messages.
But, I think we *could* maybe do this...
· Leave all FATAL_ERROR messages unchanged
· For all STATUS messages indicating an ignorable failure (e.g. NOT FOUND) change those to use GLOBAL_ERROR_MODE
· For all WARNING messages, changes those to use GLOBAL_ERROR_MODE
· For all AUTHOR_WARNING messages (don't think we have any of these) changes those to use GLOBAL_ERROR_MODE
· For all SEND_ERROR messages, changed those to GLOBAL_ERROR_MODE
By default, set GLOBAL_ERROR_MODE to WARNING
But, if -DWERROR:BOOL=ON, that will set GLOBAL_ERROR_MODE to FATAL_ERROR
In theory, then, running CMake with -DWERROR:BOOL=ON would have the effect of terminating immediately upon first less-than-ideal result.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2745
Status: Rejected
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Provide strict CMake option to stop on first failure
Assigned to:
Category:
Target version:
Author: Mark Miller
Start: 01/26/2017
Due date:
% Done: 0
Estimated time:
Created: 01/26/2017 01:28 pm
Updated: 02/14/2017 06:31 pm
Likelihood:
Severity:
Found in version:
Impact: 4 - High
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
When CMake'ing VisIt fails in some fundamental way, the error often scrolls off the screen and processing continues such that user is unaware of issue.
I propose adding a strict mode...
I wonder...is there a way to invoke CMake to die on first warning? I looked briefly and didn't find support for this in CMake itself. If not, maybe a way to globally set MESSAGE command's message type?
One issue I see in top-level CMakeLists.txt is that we use MESSAGE *a*lot* with type of STATUS even when issuing failure (e.g. NOT FOUND) messages.
But, I think we *could* maybe do this...
· Leave all FATAL_ERROR messages unchanged
· For all STATUS messages indicating an ignorable failure (e.g. NOT FOUND) change those to use GLOBAL_ERROR_MODE
· For all WARNING messages, changes those to use GLOBAL_ERROR_MODE
· For all AUTHOR_WARNING messages (don't think we have any of these) changes those to use GLOBAL_ERROR_MODE
· For all SEND_ERROR messages, changed those to GLOBAL_ERROR_MODE
By default, set GLOBAL_ERROR_MODE to WARNING
But, if -DWERROR:BOOL=ON, that will set GLOBAL_ERROR_MODE to FATAL_ERROR
In theory, then, running CMake with -DWERROR:BOOL=ON would have the effect of terminating immediately upon first less-than-ideal result.
Comments:
| 1.0 | Provide strict CMake option to stop on first failure - When CMake'ing VisIt fails in some fundamental way, the error often scrolls off the screen and processing continues such that user is unaware of issue.
I propose adding a strict mode...
I wonder...is there a way to invoke CMake to die on first warning? I looked briefly and didn't find support for this in CMake itself. If not, maybe a way to globally set MESSAGE command's message type?
One issue I see in top-level CMakeLists.txt is that we use MESSAGE *a*lot* with type of STATUS even when issuing failure (e.g. NOT FOUND) messages.
But, I think we *could* maybe do this...
· Leave all FATAL_ERROR messages unchanged
· For all STATUS messages indicating an ignorable failure (e.g. NOT FOUND) change those to use GLOBAL_ERROR_MODE
· For all WARNING messages, changes those to use GLOBAL_ERROR_MODE
· For all AUTHOR_WARNING messages (don't think we have any of these) changes those to use GLOBAL_ERROR_MODE
· For all SEND_ERROR messages, changed those to GLOBAL_ERROR_MODE
By default, set GLOBAL_ERROR_MODE to WARNING
But, if -DWERROR:BOOL=ON, that will set GLOBAL_ERROR_MODE to FATAL_ERROR
In theory, then, running CMake with -DWERROR:BOOL=ON would have the effect of terminating immediately upon first less-than-ideal result.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2745
Status: Rejected
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Provide strict CMake option to stop on first failure
Assigned to:
Category:
Target version:
Author: Mark Miller
Start: 01/26/2017
Due date:
% Done: 0
Estimated time:
Created: 01/26/2017 01:28 pm
Updated: 02/14/2017 06:31 pm
Likelihood:
Severity:
Found in version:
Impact: 4 - High
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
When CMake'ing VisIt fails in some fundamental way, the error often scrolls off the screen and processing continues such that user is unaware of issue.
I propose adding a strict mode...
I wonder...is there a way to invoke CMake to die on first warning? I looked briefly and didn't find support for this in CMake itself. If not, maybe a way to globally set MESSAGE command's message type?
One issue I see in top-level CMakeLists.txt is that we use MESSAGE *a*lot* with type of STATUS even when issuing failure (e.g. NOT FOUND) messages.
But, I think we *could* maybe do this...
· Leave all FATAL_ERROR messages unchanged
· For all STATUS messages indicating an ignorable failure (e.g. NOT FOUND) change those to use GLOBAL_ERROR_MODE
· For all WARNING messages, changes those to use GLOBAL_ERROR_MODE
· For all AUTHOR_WARNING messages (don't think we have any of these) changes those to use GLOBAL_ERROR_MODE
· For all SEND_ERROR messages, changed those to GLOBAL_ERROR_MODE
By default, set GLOBAL_ERROR_MODE to WARNING
But, if -DWERROR:BOOL=ON, that will set GLOBAL_ERROR_MODE to FATAL_ERROR
In theory, then, running CMake with -DWERROR:BOOL=ON would have the effect of terminating immediately upon first less-than-ideal result.
Comments:
| priority | provide strict cmake option to stop on first failure when cmake ing visit fails in some fundamental way the error often scrolls off the screen and processing continues such that user is unaware of issue i propose adding a strict mode i wonder is there a way to invoke cmake to die on first warning i looked briefly and didn t find support for this in cmake itself if not maybe a way to globally set message command s message type one issue i see in top level cmakelists txt is that we use message a lot with type of status even when issuing failure e g not found messages but i think we could maybe do this · leave all fatal error messages unchanged · for all status messages indicating an ignorable failure e g not found change those to use global error mode · for all warning messages changes those to use global error mode · for all author warning messages don t think we have any of these changes those to use global error mode · for all send error messages changed those to global error mode by default set global error mode to warning but if dwerror bool on that will set global error mode to fatal error in theory then running cmake with dwerror bool on would have the effect of terminating immediately upon first less than ideal result redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status rejected project visit tracker feature priority normal subject provide strict cmake option to stop on first failure assigned to category target version author mark miller start due date done estimated time created pm updated pm likelihood severity found in version impact high expected use occasional os all support group any description when cmake ing visit fails in some fundamental way the error often scrolls off the screen and processing continues such that user is unaware of issue i propose adding a strict mode i wonder is there a way to invoke cmake to die on first warning i looked briefly and didn t find support for this in cmake itself if not maybe a way to globally set message command s message type one issue i see in top level cmakelists txt is that we use message a lot with type of status even when issuing failure e g not found messages but i think we could maybe do this · leave all fatal error messages unchanged · for all status messages indicating an ignorable failure e g not found change those to use global error mode · for all warning messages changes those to use global error mode · for all author warning messages don t think we have any of these changes those to use global error mode · for all send error messages changed those to global error mode by default set global error mode to warning but if dwerror bool on that will set global error mode to fatal error in theory then running cmake with dwerror bool on would have the effect of terminating immediately upon first less than ideal result comments | 1 |
770,069 | 27,028,079,450 | IssuesEvent | 2023-02-11 21:27:27 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | Consider forcing test_package not use configuration | type: feature stage: queue priority: high complex: medium | Right now, the test_package/conanfile.py can define options and other configuration, that change the way the package is created. So a ``conan create`` behaves differently to a ``conan export`` + ``conan install --build=missing`` + ``conan test test_package``.
The dependency graph is computed from the test_package/conanfile, which adds some complexity to the code, too, would be nice to simplify it.
This would be breaking with some test_package/conanfile.py, but won't break the recipes themselves, only the ``conan create`` flow, and should be difficult to fix. But conan 2.0.
| 1.0 | Consider forcing test_package not use configuration - Right now, the test_package/conanfile.py can define options and other configuration, that change the way the package is created. So a ``conan create`` behaves differently to a ``conan export`` + ``conan install --build=missing`` + ``conan test test_package``.
The dependency graph is computed from the test_package/conanfile, which adds some complexity to the code, too, would be nice to simplify it.
This would be breaking with some test_package/conanfile.py, but won't break the recipes themselves, only the ``conan create`` flow, and should be difficult to fix. But conan 2.0.
| priority | consider forcing test package not use configuration right now the test package conanfile py can define options and other configuration that change the way the package is created so a conan create behaves differently to a conan export conan install build missing conan test test package the dependency graph is computed from the test package conanfile which adds some complexity to the code too would be nice to simplify it this would be breaking with some test package conanfile py but won t break the recipes themselves only the conan create flow and should be difficult to fix but conan | 1 |
164,499 | 6,227,298,751 | IssuesEvent | 2017-07-10 20:27:26 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Add Publishing Status | enhancement Priority: High | Please add an icon to the top row of icons while in site view mode to the right of Search. The icon class is: `fa-cloud-upload` with text description "Publishing Status"
The icon color is based on the return of this API call:
http://docs.craftercms.org/en/3.0/developers/projects/studio/api/publish/status.html
Color scheme, if the returned `"status"` is:
- `"idle"` icon color is unchanged / normal
- `"busy"` icon color is orange
- `"stopped"` icon color is red
- `"unknown"` icon color is black and message is the exception if 3xx, 4xx, 5xx or unreachable
The API will be invoked every 60 seconds (like ticket validation) and the UI is updated.
If the user clicks on the icon, please show the `"message"` field of the last returned API call.
Ping me if you have any questions.

| 1.0 | [studio-ui] Add Publishing Status - Please add an icon to the top row of icons while in site view mode to the right of Search. The icon class is: `fa-cloud-upload` with text description "Publishing Status"
The icon color is based on the return of this API call:
http://docs.craftercms.org/en/3.0/developers/projects/studio/api/publish/status.html
Color scheme, if the returned `"status"` is:
- `"idle"` icon color is unchanged / normal
- `"busy"` icon color is orange
- `"stopped"` icon color is red
- `"unknown"` icon color is black and message is the exception if 3xx, 4xx, 5xx or unreachable
The API will be invoked every 60 seconds (like ticket validation) and the UI is updated.
If the user clicks on the icon, please show the `"message"` field of the last returned API call.
Ping me if you have any questions.

| priority | add publishing status please add an icon to the top row of icons while in site view mode to the right of search the icon class is fa cloud upload with text description publishing status the icon color is based on the return of this api call color scheme if the returned status is idle icon color is unchanged normal busy icon color is orange stopped icon color is red unknown icon color is black and message is the exception if or unreachable the api will be invoked every seconds like ticket validation and the ui is updated if the user clicks on the icon please show the message field of the last returned api call ping me if you have any questions | 1 |
396,512 | 11,709,730,131 | IssuesEvent | 2020-03-08 20:30:03 | open-gunz/source | https://api.github.com/repos/open-gunz/source | opened | Fix TDM Mode | High Priority | Team Deathmatch round ending even when some players are alive on both teams. This occurs directly after a certain player dies (similar to assassination mode when it ends when the boss dies). | 1.0 | Fix TDM Mode - Team Deathmatch round ending even when some players are alive on both teams. This occurs directly after a certain player dies (similar to assassination mode when it ends when the boss dies). | priority | fix tdm mode team deathmatch round ending even when some players are alive on both teams this occurs directly after a certain player dies similar to assassination mode when it ends when the boss dies | 1 |
465,634 | 13,389,484,936 | IssuesEvent | 2020-09-02 18:57:39 | mathedjoe/animaltracker | https://api.github.com/repos/mathedjoe/animaltracker | closed | Getting Elevation from AWS with Limited Memory | high-priority | 1) Was there a way we can reduce the amount of memory dedicated to finding elevation at Zoom 11 from AWS?
2) I upload a .zip with 10 files. When the file uploads into Animal Tracker, it only selects five animals. I click on the "Select All" button--to select all cows--and then click on Zoom 11 and "Process". Animal Tracker is not able to process all 10 files at Zoom 11.
[RiversideSaddleButte2018_Spring.zip](https://github.com/mathedjoe/animaltracker/files/4285165/RiversideSaddleButte2018_Spring.zip)
3) Time: Please include (hh:mm:ss) to indicate time format.
-The time is not intuitive. I try to change it and my screen goes green. | 1.0 | Getting Elevation from AWS with Limited Memory - 1) Was there a way we can reduce the amount of memory dedicated to finding elevation at Zoom 11 from AWS?
2) I upload a .zip with 10 files. When the file uploads into Animal Tracker, it only selects five animals. I click on the "Select All" button--to select all cows--and then click on Zoom 11 and "Process". Animal Tracker is not able to process all 10 files at Zoom 11.
[RiversideSaddleButte2018_Spring.zip](https://github.com/mathedjoe/animaltracker/files/4285165/RiversideSaddleButte2018_Spring.zip)
3) Time: Please include (hh:mm:ss) to indicate time format.
-The time is not intuitive. I try to change it and my screen goes green. | priority | getting elevation from aws with limited memory was there a way we can reduce the amount of memory dedicated to finding elevation at zoom from aws i upload a zip with files when the file uploads into animal tracker it only selects five animals i click on the select all button to select all cows and then click on zoom and process animal tracker is not able to process all files at zoom time please include hh mm ss to indicate time format the time is not intuitive i try to change it and my screen goes green | 1 |
323,392 | 9,854,137,212 | IssuesEvent | 2019-06-19 16:09:00 | geosolutions-it/MapStore2-C098 | https://api.github.com/repos/geosolutions-it/MapStore2-C098 | closed | List existing Missions | Frontend Priority: High Project: C098 SCIADRO | List existing Missions within the selected Asset with the possibility to select one of them to see the mission path on the map | 1.0 | List existing Missions - List existing Missions within the selected Asset with the possibility to select one of them to see the mission path on the map | priority | list existing missions list existing missions within the selected asset with the possibility to select one of them to see the mission path on the map | 1 |
635,537 | 20,405,462,754 | IssuesEvent | 2022-02-23 04:28:43 | CoEDL/nyingarn-workspace | https://api.github.com/repos/CoEDL/nyingarn-workspace | closed | Add capability to invite users to see my item | enhancement priority-high user-request | A user needs to be able to invite another user in the system to see their item. As there are no roles, all users have equal access to the item (do we need roles in future?).
Users must provide exact name or email to find others in the system to prevent data leaks. | 1.0 | Add capability to invite users to see my item - A user needs to be able to invite another user in the system to see their item. As there are no roles, all users have equal access to the item (do we need roles in future?).
Users must provide exact name or email to find others in the system to prevent data leaks. | priority | add capability to invite users to see my item a user needs to be able to invite another user in the system to see their item as there are no roles all users have equal access to the item do we need roles in future users must provide exact name or email to find others in the system to prevent data leaks | 1 |
370,123 | 10,925,613,057 | IssuesEvent | 2019-11-22 12:57:59 | arrow-kt/arrow-meta | https://api.github.com/repos/arrow-kt/arrow-meta | opened | Improve the user experience with sidebar menu | high-priority web | There are a couple of issues in the sidebar menu of the Arrow meta site:
- Every time an item of the menu is selected, the whole menu is reloaded. We would have to avoid that effect if possible.
- When the user selects an item of the `Compiler API` section, the highlighted option in the menu is the item with the same name but in the `Intellij IDEA API` section. | 1.0 | Improve the user experience with sidebar menu - There are a couple of issues in the sidebar menu of the Arrow meta site:
- Every time an item of the menu is selected, the whole menu is reloaded. We would have to avoid that effect if possible.
- When the user selects an item of the `Compiler API` section, the highlighted option in the menu is the item with the same name but in the `Intellij IDEA API` section. | priority | improve the user experience with sidebar menu there are a couple of issues in the sidebar menu of the arrow meta site every time an item of the menu is selected the whole menu is reloaded we would have to avoid that effect if possible when the user selects an item of the compiler api section the highlighted option in the menu is the item with the same name but in the intellij idea api section | 1 |
138,865 | 5,348,889,472 | IssuesEvent | 2017-02-18 10:12:52 | open-serious/open-serious | https://api.github.com/repos/open-serious/open-serious | closed | Linker error: multiple definition of `_hwndMain` | os.linux priority.high type.bug | Currently we cannot build the linux version because of a linker error:
```
CMakeFiles/ssam.dir/SeriousSam/MainWindow.cpp.o:(.bss+0x8): multiple definition of `_hwndMain'
CMakeFiles/ssam.dir/Engine/Engine.cpp.o:(.bss+0x18): first defined here
collect2: error: ld returned 1 exit status
CMakeFiles/ssam.dir/build.make:5096: recipe for target 'ssam' failed
make[2]: *** [ssam] Error 1
``` | 1.0 | Linker error: multiple definition of `_hwndMain` - Currently we cannot build the linux version because of a linker error:
```
CMakeFiles/ssam.dir/SeriousSam/MainWindow.cpp.o:(.bss+0x8): multiple definition of `_hwndMain'
CMakeFiles/ssam.dir/Engine/Engine.cpp.o:(.bss+0x18): first defined here
collect2: error: ld returned 1 exit status
CMakeFiles/ssam.dir/build.make:5096: recipe for target 'ssam' failed
make[2]: *** [ssam] Error 1
``` | priority | linker error multiple definition of hwndmain currently we cannot build the linux version because of a linker error cmakefiles ssam dir serioussam mainwindow cpp o bss multiple definition of hwndmain cmakefiles ssam dir engine engine cpp o bss first defined here error ld returned exit status cmakefiles ssam dir build make recipe for target ssam failed make error | 1 |
169,115 | 6,395,244,718 | IssuesEvent | 2017-08-04 12:45:20 | vmware/admiral | https://api.github.com/repos/vmware/admiral | closed | OVA: UI hangs while adding Harbor to Admiral | priority/high | **Description:**
The UI hangs upon pressing the Save/Verify button when attempting to add Harbor as shown below. The clip below shows me waiting for about 15 seconds because of file upload size restrictions on Github, but it's been hung at that point for over a few minutes and doesn't seem to progress. If I press Cancel and go back to the UI, I don't see Harbor added.

**Container logs:**
```
[696][I][2017-08-03T16:57:52.367Z][698][SslCertificateResolver][connect][@@@ connectionCertificates is empty]
[697][I][2017-08-03T16:57:52.379Z][698][SslCertificateResolver$1][checkServerTrusted][@@@ checkServerTrusted: checking [[
<redacted>
]]
[698][I][2017-08-03T16:57:52.384Z][698][SslCertificateResolver$1][checkServerTrusted][@@@ certsTrusted = false ; adding certs]
[699][I][2017-08-03T16:57:52.384Z][698][SslCertificateResolver$1][checkServerTrusted][@@@ connectionCertificates = <redacted>]
Exception in thread "pool-8-thread-17" java.lang.SecurityException: class "org.bouncycastle.util.Encodable"'s signer information does not match signer information of other classes in the same package
at java.lang.ClassLoader.checkCerts(ClassLoader.java:898)
at java.lang.ClassLoader.preDefineClass(ClassLoader.java:668)
at java.lang.ClassLoader.defineClass(ClassLoader.java:761)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.bouncycastle.openssl.jcajce.JcaMiscPEMGenerator.convertObject(Unknown Source)
at org.bouncycastle.openssl.jcajce.JcaMiscPEMGenerator.<init>(Unknown Source)
at org.bouncycastle.openssl.PEMWriter.writeObject(Unknown Source)
at org.bouncycastle.openssl.PEMWriter.writeObject(Unknown Source)
at com.vmware.admiral.common.util.CertificateUtilExtended.certToPEMformat(CertificateUtilExtended.java:118)
at com.vmware.admiral.common.util.CertificateUtilExtended.toPEMformat(CertificateUtilExtended.java:66)
at com.vmware.admiral.common.util.CertificateUtilExtended.toPEMformat(CertificateUtilExtended.java:78)
at com.vmware.admiral.service.common.SslTrustImportService.createSslTrustCertificateState(SslTrustImportService.java:187)
at com.vmware.admiral.service.common.SslTrustImportService.lambda$handlePut$1(SslTrustImportService.java:111)
at com.vmware.admiral.common.util.SslCertificateResolver.lambda$execute$0(SslCertificateResolver.java:105)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
```
{"key":"__build.number","value":"66","documentVersion":0,"documentEpoch":0,"documentKind":"com:vmware:admiral:service:common:ConfigurationService:ConfigurationState","documentSelfLink":"/config/props/__build.number","documentUpdateTimeMicros":1501774860910003,"documentUpdateAction":"POST","documentExpirationTimeMicros":0,"documentOwner":"ca4b3d58-b817-4729-91b9-a6327b70489c","documentAuthPrincipalLink":"/core/authz/system-user"}
``` | 1.0 | OVA: UI hangs while adding Harbor to Admiral - **Description:**
The UI hangs upon pressing the Save/Verify button when attempting to add Harbor as shown below. The clip below shows me waiting for about 15 seconds because of file upload size restrictions on Github, but it's been hung at that point for over a few minutes and doesn't seem to progress. If I press Cancel and go back to the UI, I don't see Harbor added.

**Container logs:**
```
[696][I][2017-08-03T16:57:52.367Z][698][SslCertificateResolver][connect][@@@ connectionCertificates is empty]
[697][I][2017-08-03T16:57:52.379Z][698][SslCertificateResolver$1][checkServerTrusted][@@@ checkServerTrusted: checking [[
<redacted>
]]
[698][I][2017-08-03T16:57:52.384Z][698][SslCertificateResolver$1][checkServerTrusted][@@@ certsTrusted = false ; adding certs]
[699][I][2017-08-03T16:57:52.384Z][698][SslCertificateResolver$1][checkServerTrusted][@@@ connectionCertificates = <redacted>]
Exception in thread "pool-8-thread-17" java.lang.SecurityException: class "org.bouncycastle.util.Encodable"'s signer information does not match signer information of other classes in the same package
at java.lang.ClassLoader.checkCerts(ClassLoader.java:898)
at java.lang.ClassLoader.preDefineClass(ClassLoader.java:668)
at java.lang.ClassLoader.defineClass(ClassLoader.java:761)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.bouncycastle.openssl.jcajce.JcaMiscPEMGenerator.convertObject(Unknown Source)
at org.bouncycastle.openssl.jcajce.JcaMiscPEMGenerator.<init>(Unknown Source)
at org.bouncycastle.openssl.PEMWriter.writeObject(Unknown Source)
at org.bouncycastle.openssl.PEMWriter.writeObject(Unknown Source)
at com.vmware.admiral.common.util.CertificateUtilExtended.certToPEMformat(CertificateUtilExtended.java:118)
at com.vmware.admiral.common.util.CertificateUtilExtended.toPEMformat(CertificateUtilExtended.java:66)
at com.vmware.admiral.common.util.CertificateUtilExtended.toPEMformat(CertificateUtilExtended.java:78)
at com.vmware.admiral.service.common.SslTrustImportService.createSslTrustCertificateState(SslTrustImportService.java:187)
at com.vmware.admiral.service.common.SslTrustImportService.lambda$handlePut$1(SslTrustImportService.java:111)
at com.vmware.admiral.common.util.SslCertificateResolver.lambda$execute$0(SslCertificateResolver.java:105)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
```
{"key":"__build.number","value":"66","documentVersion":0,"documentEpoch":0,"documentKind":"com:vmware:admiral:service:common:ConfigurationService:ConfigurationState","documentSelfLink":"/config/props/__build.number","documentUpdateTimeMicros":1501774860910003,"documentUpdateAction":"POST","documentExpirationTimeMicros":0,"documentOwner":"ca4b3d58-b817-4729-91b9-a6327b70489c","documentAuthPrincipalLink":"/core/authz/system-user"}
``` | priority | ova ui hangs while adding harbor to admiral description the ui hangs upon pressing the save verify button when attempting to add harbor as shown below the clip below shows me waiting for about seconds because of file upload size restrictions on github but it s been hung at that point for over a few minutes and doesn t seem to progress if i press cancel and go back to the ui i don t see harbor added container logs checkservertrusted checking exception in thread pool thread java lang securityexception class org bouncycastle util encodable s signer information does not match signer information of other classes in the same package at java lang classloader checkcerts classloader java at java lang classloader predefineclass classloader java at java lang classloader defineclass classloader java at java security secureclassloader defineclass secureclassloader java at java net urlclassloader defineclass urlclassloader java at java net urlclassloader access urlclassloader java at java net urlclassloader run urlclassloader java at java net urlclassloader run urlclassloader java at java security accesscontroller doprivileged native method at java net urlclassloader findclass urlclassloader java at java lang classloader loadclass classloader java at sun misc launcher appclassloader loadclass launcher java at java lang classloader loadclass classloader java at java lang classloader native method at java lang classloader defineclass classloader java at java security secureclassloader defineclass secureclassloader java at java net urlclassloader defineclass urlclassloader java at java net urlclassloader access urlclassloader java at java net urlclassloader run urlclassloader java at java net urlclassloader run urlclassloader java at java security accesscontroller doprivileged native method at java net urlclassloader findclass urlclassloader java at java lang classloader loadclass classloader java at sun misc launcher appclassloader loadclass launcher java at java lang classloader loadclass classloader java at java lang classloader native method at java lang classloader defineclass classloader java at java security secureclassloader defineclass secureclassloader java at java net urlclassloader defineclass urlclassloader java at java net urlclassloader access urlclassloader java at java net urlclassloader run urlclassloader java at java net urlclassloader run urlclassloader java at java security accesscontroller doprivileged native method at java net urlclassloader findclass urlclassloader java at java lang classloader loadclass classloader java at sun misc launcher appclassloader loadclass launcher java at java lang classloader loadclass classloader java at org bouncycastle openssl jcajce jcamiscpemgenerator convertobject unknown source at org bouncycastle openssl jcajce jcamiscpemgenerator unknown source at org bouncycastle openssl pemwriter writeobject unknown source at org bouncycastle openssl pemwriter writeobject unknown source at com vmware admiral common util certificateutilextended certtopemformat certificateutilextended java at com vmware admiral common util certificateutilextended topemformat certificateutilextended java at com vmware admiral common util certificateutilextended topemformat certificateutilextended java at com vmware admiral service common ssltrustimportservice createssltrustcertificatestate ssltrustimportservice java at com vmware admiral service common ssltrustimportservice lambda handleput ssltrustimportservice java at com vmware admiral common util sslcertificateresolver lambda execute sslcertificateresolver java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java key build number value documentversion documentepoch documentkind com vmware admiral service common configurationservice configurationstate documentselflink config props build number documentupdatetimemicros documentupdateaction post documentexpirationtimemicros documentowner documentauthprincipallink core authz system user | 1 |
525,533 | 15,255,640,034 | IssuesEvent | 2021-02-20 16:53:49 | lorenzwalthert/precommit | https://api.github.com/repos/lorenzwalthert/precommit | closed | Handling dependencies | Complexity: High Priority: Low Status: Postponed | As long as R is not a supported language, I wonder how we can install R dependencies like R packages such as styler. They probably need to be installed manually. This is not really nice and hence R support in pre-commit would be nice anyways. But since R cannot provide an executable (see https://github.com/r-lib/styler/issues/467#issuecomment-491553529), it would be merely for managing dependencies and then we'd anyways use the system entry point with an R script as with the current implementation. Also see https://github.com/pre-commit/pre-commit/issues/926. cc: @katrinleinweber. | 1.0 | Handling dependencies - As long as R is not a supported language, I wonder how we can install R dependencies like R packages such as styler. They probably need to be installed manually. This is not really nice and hence R support in pre-commit would be nice anyways. But since R cannot provide an executable (see https://github.com/r-lib/styler/issues/467#issuecomment-491553529), it would be merely for managing dependencies and then we'd anyways use the system entry point with an R script as with the current implementation. Also see https://github.com/pre-commit/pre-commit/issues/926. cc: @katrinleinweber. | priority | handling dependencies as long as r is not a supported language i wonder how we can install r dependencies like r packages such as styler they probably need to be installed manually this is not really nice and hence r support in pre commit would be nice anyways but since r cannot provide an executable see it would be merely for managing dependencies and then we d anyways use the system entry point with an r script as with the current implementation also see cc katrinleinweber | 1 |
503,724 | 14,597,199,476 | IssuesEvent | 2020-12-20 19:04:40 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.chaturbate.com - video or audio doesn't play | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-important | <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63985 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://m.chaturbate.com/casspertheghxst/
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
Can't play the live streaming
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/e7265666-9b80-40fe-bc94-a2ed21d09163.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201217185930</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/c9cb467e-4671-4579-bda9-72f43e9c8f1e)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.chaturbate.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63985 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://m.chaturbate.com/casspertheghxst/
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
Can't play the live streaming
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/e7265666-9b80-40fe-bc94-a2ed21d09163.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201217185930</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/c9cb467e-4671-4579-bda9-72f43e9c8f1e)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | m chaturbate com video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes other problem type video or audio doesn t play description the video or audio does not play steps to reproduce can t play the live streaming view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
193,784 | 6,888,106,879 | IssuesEvent | 2017-11-22 03:32:55 | rnleach/sonde | https://api.github.com/repos/rnleach/sonde | closed | Use tags instead of CSS for text view styling. | bug High Priority | CSS is not consistent across platforms/versions of GTK+, it doesn't work on windows. | 1.0 | Use tags instead of CSS for text view styling. - CSS is not consistent across platforms/versions of GTK+, it doesn't work on windows. | priority | use tags instead of css for text view styling css is not consistent across platforms versions of gtk it doesn t work on windows | 1 |
479,838 | 13,806,227,226 | IssuesEvent | 2020-10-11 16:49:29 | ricardobalk/go-osmand-tracker | https://api.github.com/repos/ricardobalk/go-osmand-tracker | closed | Fix panic on empty database | bug high priority | When starting the server for the very first time, without locations added to a database, the application panics:
```
2020/10/11 18:23:41 [Recovery] 2020/10/11 - 18:23:41 panic recovered:
runtime error: index out of range [0] with length 0
/usr/lib/go-1.14/src/runtime/panic.go:88 (0x432732)
goPanicIndex: panic(boundsError{x: int64(x), signed: true, y: y, code: boundsIndex})
/home/undisclosed/Local/Git/GitHub/ricardobalk/go-osmand-tracker/internal/server/server.go:132 (0x938e2a)
Listen.func3: lastDatabaseAddition[0] = entry
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/context.go:161 (0x922b6a)
(*Context).Next: c.handlers[c.index](c)
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/recovery.go:83 (0x935f3f)
RecoveryWithWriter.func1: c.Next()
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/context.go:161 (0x922b6a)
(*Context).Next: c.handlers[c.index](c)
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/gin.go:409 (0x92c615)
(*Engine).handleHTTPRequest: c.Next()
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/gin.go:367 (0x92bd2c)
(*Engine).ServeHTTP: engine.handleHTTPRequest(c)
/usr/lib/go-1.14/src/net/http/server.go:2807 (0x7495d2)
serverHandler.ServeHTTP: handler.ServeHTTP(rw, req)
/usr/lib/go-1.14/src/net/http/server.go:1895 (0x744f4b)
(*conn).serve: serverHandler{c.server}.ServeHTTP(w, w.req)
/usr/lib/go-1.14/src/runtime/asm_amd64.s:1373 (0x4642d0)
goexit: BYTE $0x90 // NOP
```
It is caused by https://github.com/ricardobalk/go-osmand-tracker/blob/bb156cdc8e7977ae6e79fcefe4a57b94486b03e4/internal/server/server.go#L132
because the `lastDatabaseAddition` array contains no keys when starting the app for the first time - so it is impossible to place something to the non-existing keyindex 0.
**Who is affected by this bug?**
- Anyone starting the app for the first time, trying to put the first location into the database via `/submit` | 1.0 | Fix panic on empty database - When starting the server for the very first time, without locations added to a database, the application panics:
```
2020/10/11 18:23:41 [Recovery] 2020/10/11 - 18:23:41 panic recovered:
runtime error: index out of range [0] with length 0
/usr/lib/go-1.14/src/runtime/panic.go:88 (0x432732)
goPanicIndex: panic(boundsError{x: int64(x), signed: true, y: y, code: boundsIndex})
/home/undisclosed/Local/Git/GitHub/ricardobalk/go-osmand-tracker/internal/server/server.go:132 (0x938e2a)
Listen.func3: lastDatabaseAddition[0] = entry
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/context.go:161 (0x922b6a)
(*Context).Next: c.handlers[c.index](c)
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/recovery.go:83 (0x935f3f)
RecoveryWithWriter.func1: c.Next()
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/context.go:161 (0x922b6a)
(*Context).Next: c.handlers[c.index](c)
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/gin.go:409 (0x92c615)
(*Engine).handleHTTPRequest: c.Next()
/home/undisclosed/.golang/pkg/mod/github.com/gin-gonic/gin@v1.6.3/gin.go:367 (0x92bd2c)
(*Engine).ServeHTTP: engine.handleHTTPRequest(c)
/usr/lib/go-1.14/src/net/http/server.go:2807 (0x7495d2)
serverHandler.ServeHTTP: handler.ServeHTTP(rw, req)
/usr/lib/go-1.14/src/net/http/server.go:1895 (0x744f4b)
(*conn).serve: serverHandler{c.server}.ServeHTTP(w, w.req)
/usr/lib/go-1.14/src/runtime/asm_amd64.s:1373 (0x4642d0)
goexit: BYTE $0x90 // NOP
```
It is caused by https://github.com/ricardobalk/go-osmand-tracker/blob/bb156cdc8e7977ae6e79fcefe4a57b94486b03e4/internal/server/server.go#L132
because the `lastDatabaseAddition` array contains no keys when starting the app for the first time - so it is impossible to place something to the non-existing keyindex 0.
**Who is affected by this bug?**
- Anyone starting the app for the first time, trying to put the first location into the database via `/submit` | priority | fix panic on empty database when starting the server for the very first time without locations added to a database the application panics panic recovered runtime error index out of range with length usr lib go src runtime panic go gopanicindex panic boundserror x x signed true y y code boundsindex home undisclosed local git github ricardobalk go osmand tracker internal server server go listen lastdatabaseaddition entry home undisclosed golang pkg mod github com gin gonic gin context go context next c handlers c home undisclosed golang pkg mod github com gin gonic gin recovery go recoverywithwriter c next home undisclosed golang pkg mod github com gin gonic gin context go context next c handlers c home undisclosed golang pkg mod github com gin gonic gin gin go engine handlehttprequest c next home undisclosed golang pkg mod github com gin gonic gin gin go engine servehttp engine handlehttprequest c usr lib go src net http server go serverhandler servehttp handler servehttp rw req usr lib go src net http server go conn serve serverhandler c server servehttp w w req usr lib go src runtime asm s goexit byte nop it is caused by because the lastdatabaseaddition array contains no keys when starting the app for the first time so it is impossible to place something to the non existing keyindex who is affected by this bug anyone starting the app for the first time trying to put the first location into the database via submit | 1 |
744,243 | 25,935,295,712 | IssuesEvent | 2022-12-16 13:38:42 | SlimeVR/SlimeVR-Server | https://api.github.com/repos/SlimeVR/SlimeVR-Server | reopened | Implement ways to reset tracker without having to open Server window. | Type: Feature Request Area: Application Protocol Priority: High | Either through tapping the tracker, controller button press and gesture, or any other potential idea.
This will be a huge QOL upgrade to people who moves a lot, having the need to reset tracker in multiple succession for experimenting tracker adjustments, or unlucky enough to just have a lot of drift happening to them, worth to be in the issue here as a constant reminder.
Update 1: Tap function for BNO is too inconsistent, this has be done with gesture/button press or both. | 1.0 | Implement ways to reset tracker without having to open Server window. - Either through tapping the tracker, controller button press and gesture, or any other potential idea.
This will be a huge QOL upgrade to people who moves a lot, having the need to reset tracker in multiple succession for experimenting tracker adjustments, or unlucky enough to just have a lot of drift happening to them, worth to be in the issue here as a constant reminder.
Update 1: Tap function for BNO is too inconsistent, this has be done with gesture/button press or both. | priority | implement ways to reset tracker without having to open server window either through tapping the tracker controller button press and gesture or any other potential idea this will be a huge qol upgrade to people who moves a lot having the need to reset tracker in multiple succession for experimenting tracker adjustments or unlucky enough to just have a lot of drift happening to them worth to be in the issue here as a constant reminder update tap function for bno is too inconsistent this has be done with gesture button press or both | 1 |
602,483 | 18,470,103,588 | IssuesEvent | 2021-10-17 15:39:19 | pnxenopoulos/csgo | https://api.github.com/repos/pnxenopoulos/csgo | opened | Add cleaning functions | Feature Request High Priority | Since the update to parse all demos, then handle cleaning locally, we need to build cleaning functions. These can be a few things to consider:
- warmup rounds
- knife rounds
- too short or too long rounds
- rounds with non-standard end reasons
- rounds with no score changes
- rounds outside of the bounds of the major events
These functions should likely be built into the `DemoParser` class. | 1.0 | Add cleaning functions - Since the update to parse all demos, then handle cleaning locally, we need to build cleaning functions. These can be a few things to consider:
- warmup rounds
- knife rounds
- too short or too long rounds
- rounds with non-standard end reasons
- rounds with no score changes
- rounds outside of the bounds of the major events
These functions should likely be built into the `DemoParser` class. | priority | add cleaning functions since the update to parse all demos then handle cleaning locally we need to build cleaning functions these can be a few things to consider warmup rounds knife rounds too short or too long rounds rounds with non standard end reasons rounds with no score changes rounds outside of the bounds of the major events these functions should likely be built into the demoparser class | 1 |
606,521 | 18,763,900,110 | IssuesEvent | 2021-11-05 20:10:40 | nlehnert1/loopy | https://api.github.com/repos/nlehnert1/loopy | opened | Create a toolbar icon for the templates button | enhancement Priority: High | We'll need an icon for the templates tool. I was thinking we could just draw a simple feedback loop and have that be the icon, but I'm definitely open to other suggestions.
The icon should:
- [ ] Be the a similar style to the other icons
- [ ] Be the same size as the other icons
- [ ] Have a transparent background (I assume)
- [ ] Get the idea across that you're about to open a list of templates to choose from, if you opt not to take the "simple feedback loop" icon idea | 1.0 | Create a toolbar icon for the templates button - We'll need an icon for the templates tool. I was thinking we could just draw a simple feedback loop and have that be the icon, but I'm definitely open to other suggestions.
The icon should:
- [ ] Be the a similar style to the other icons
- [ ] Be the same size as the other icons
- [ ] Have a transparent background (I assume)
- [ ] Get the idea across that you're about to open a list of templates to choose from, if you opt not to take the "simple feedback loop" icon idea | priority | create a toolbar icon for the templates button we ll need an icon for the templates tool i was thinking we could just draw a simple feedback loop and have that be the icon but i m definitely open to other suggestions the icon should be the a similar style to the other icons be the same size as the other icons have a transparent background i assume get the idea across that you re about to open a list of templates to choose from if you opt not to take the simple feedback loop icon idea | 1 |
326,600 | 9,958,295,043 | IssuesEvent | 2019-07-05 20:27:05 | aragon/aragon-cli | https://api.github.com/repos/aragon/aragon-cli | closed | Output publish information to decide before publishing | cmd: apm publish 🦅 flock/nest high priority | ## 🚀 Feature
Update the `aragon apm publish` command to output the information that is going to be published instead of fetching the repo after publish.
## Pitch
In this way is going to be possible to decide to publish or not.
| 1.0 | Output publish information to decide before publishing - ## 🚀 Feature
Update the `aragon apm publish` command to output the information that is going to be published instead of fetching the repo after publish.
## Pitch
In this way is going to be possible to decide to publish or not.
| priority | output publish information to decide before publishing 🚀 feature update the aragon apm publish command to output the information that is going to be published instead of fetching the repo after publish pitch in this way is going to be possible to decide to publish or not | 1 |
583,403 | 17,384,783,141 | IssuesEvent | 2021-08-01 12:02:26 | everyday-as/gmodstore-issues | https://api.github.com/repos/everyday-as/gmodstore-issues | closed | Send email about ToS changes and make people set up wallet payouts | High Priority | Do this shortly after launch of 6.0 | 1.0 | Send email about ToS changes and make people set up wallet payouts - Do this shortly after launch of 6.0 | priority | send email about tos changes and make people set up wallet payouts do this shortly after launch of | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.