Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
388,094 | 11,474,466,090 | IssuesEvent | 2020-02-10 04:24:48 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | [DOC] Account Recovery REST API page is not available | Affected/5.7.0 Complexity/Low Priority/High Severity/Blocker Status/In Progress Type/Docs | Document version - IS 5.7.0
'Using the Account Recovery REST APIs' link in the document [1] is not opening the page.
[1] https://docs.wso2.com/display/IS570/REST+APIs | 1.0 | [DOC] Account Recovery REST API page is not available - Document version - IS 5.7.0
'Using the Account Recovery REST APIs' link in the document [1] is not opening the page.
[1] https://docs.wso2.com/display/IS570/REST+APIs | priority | account recovery rest api page is not available document version is using the account recovery rest apis link in the document is not opening the page | 1 |
5,504 | 2,576,979,809 | IssuesEvent | 2015-02-12 14:25:07 | phusion/passenger | https://api.github.com/repos/phusion/passenger | closed | Vendor daemon_controller and crash-watch | Bounty/Easy Priority/High | We should vendor daemon_controller and crash-watch. That makes packaging much easier. | 1.0 | Vendor daemon_controller and crash-watch - We should vendor daemon_controller and crash-watch. That makes packaging much easier. | priority | vendor daemon controller and crash watch we should vendor daemon controller and crash watch that makes packaging much easier | 1 |
421,987 | 12,264,625,110 | IssuesEvent | 2020-05-07 04:58:25 | dedis/cothority | https://api.github.com/repos/dedis/cothority | closed | View change: not enough proofs | bug high priority | Sometimes the view change gets stuck with:
```
E 19/29/10 15:23:12.138834793: viewchange.go:336 (byzcoin.(*Service).verifyViewChange) - tls://fairywren.ch:7770 not enough proofs: %v <= %v 2 2
``` | 1.0 | View change: not enough proofs - Sometimes the view change gets stuck with:
```
E 19/29/10 15:23:12.138834793: viewchange.go:336 (byzcoin.(*Service).verifyViewChange) - tls://fairywren.ch:7770 not enough proofs: %v <= %v 2 2
``` | priority | view change not enough proofs sometimes the view change gets stuck with e viewchange go byzcoin service verifyviewchange tls fairywren ch not enough proofs v v | 1 |
532,992 | 15,574,885,060 | IssuesEvent | 2021-03-17 10:22:57 | KusinVitamin/Projekt-Hemsida | https://api.github.com/repos/KusinVitamin/Projekt-Hemsida | closed | Feedback efter test | Needs more info Priority: High | # User story
As a ..., I want to ..., so I can ...
*Ideally, this is in the issue title, but if not, you can put it here. If so, delete this section.*
# Acceptance criteria
- [ ] This is something that can be verified to show that this user story is satisfied.
# Sprint Ready Checklist
1. - [ ] Acceptance criteria defined
2. - [ ] Team understands acceptance criteria
3. - [ ] Team has defined solution / steps to satisfy acceptance criteria
4. - [ ] Acceptance criteria is verifiable / testable
5. - [ ] External / 3rd Party dependencies identified
| 1.0 | Feedback efter test - # User story
As a ..., I want to ..., so I can ...
*Ideally, this is in the issue title, but if not, you can put it here. If so, delete this section.*
# Acceptance criteria
- [ ] This is something that can be verified to show that this user story is satisfied.
# Sprint Ready Checklist
1. - [ ] Acceptance criteria defined
2. - [ ] Team understands acceptance criteria
3. - [ ] Team has defined solution / steps to satisfy acceptance criteria
4. - [ ] Acceptance criteria is verifiable / testable
5. - [ ] External / 3rd Party dependencies identified
| priority | feedback efter test user story as a i want to so i can ideally this is in the issue title but if not you can put it here if so delete this section acceptance criteria this is something that can be verified to show that this user story is satisfied sprint ready checklist acceptance criteria defined team understands acceptance criteria team has defined solution steps to satisfy acceptance criteria acceptance criteria is verifiable testable external party dependencies identified | 1 |
304,361 | 9,331,137,320 | IssuesEvent | 2019-03-28 09:03:42 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Claims not received in id token in response, or in id token for the code, returned via code id_token OIDC hybrid flow | Affected/5.8.0-Alpha2 Complexity/Medium Component/OAuth Priority/High Severity/Critical Type/Bug | **Environment:**
DB: DB2
JDK: Java 8
**Steps to reproduce:**
1. Create a tenant
2. Create a user in that tenant
3. Update first name -> http://wso2.org/claims/givenname, last name -> http://wso2.org/claims/lastname, email -> http://wso2.org/claims/emailaddress claims of that user
4. Create an oauth application
5. Configure claims section as below.

6. Make sure above claims are available for OIDC scopes as below.
given_name (mapped to http://wso2.org/claims/givenname) in openid scope
email (mapped to http://wso2.org/claims/lastname) only in email scope
family_name (mapped to http://wso2.org/claims/lastname) only in profile scope
gender (mapped to http://wso2.org/claims/gender)in openid scope
7. Initiate a request with OIDC Hybrid flow for response type **code id_token** only for openid scope
https://localhost:9443/oauth2/authorize?response_type=code id_token&client_id=xxx&nonce=asd&redirect_uri=http://localhost:8080/playground2/oauth2client&scope=openid
8. Note that only sub claim is returned in id token. Both given-name and sub expected.
9. Now initiate a token request for the code received. It returns only sub in id token. It should also return given-name
10. Perform step 7, 8 and 9 to openid email, openid email profile scope and note non responds with claims in requested scopes | 1.0 | Claims not received in id token in response, or in id token for the code, returned via code id_token OIDC hybrid flow - **Environment:**
DB: DB2
JDK: Java 8
**Steps to reproduce:**
1. Create a tenant
2. Create a user in that tenant
3. Update first name -> http://wso2.org/claims/givenname, last name -> http://wso2.org/claims/lastname, email -> http://wso2.org/claims/emailaddress claims of that user
4. Create an oauth application
5. Configure claims section as below.

6. Make sure above claims are available for OIDC scopes as below.
given_name (mapped to http://wso2.org/claims/givenname) in openid scope
email (mapped to http://wso2.org/claims/lastname) only in email scope
family_name (mapped to http://wso2.org/claims/lastname) only in profile scope
gender (mapped to http://wso2.org/claims/gender)in openid scope
7. Initiate a request with OIDC Hybrid flow for response type **code id_token** only for openid scope
https://localhost:9443/oauth2/authorize?response_type=code id_token&client_id=xxx&nonce=asd&redirect_uri=http://localhost:8080/playground2/oauth2client&scope=openid
8. Note that only sub claim is returned in id token. Both given-name and sub expected.
9. Now initiate a token request for the code received. It returns only sub in id token. It should also return given-name
10. Perform step 7, 8 and 9 to openid email, openid email profile scope and note non responds with claims in requested scopes | priority | claims not received in id token in response or in id token for the code returned via code id token oidc hybrid flow environment db jdk java steps to reproduce create a tenant create a user in that tenant update first name last name email claims of that user create an oauth application configure claims section as below make sure above claims are available for oidc scopes as below given name mapped to in openid scope email mapped to only in email scope family name mapped to only in profile scope gender mapped to openid scope initiate a request with oidc hybrid flow for response type code id token only for openid scope id token client id xxx nonce asd redirect uri note that only sub claim is returned in id token both given name and sub expected now initiate a token request for the code received it returns only sub in id token it should also return given name perform step and to openid email openid email profile scope and note non responds with claims in requested scopes | 1 |
526,942 | 15,305,334,570 | IssuesEvent | 2021-02-24 17:59:23 | netlify/explorers | https://api.github.com/repos/netlify/explorers | closed | Sends `mission-complete` data to activity table | gamification mscw: must priority: high to pair on type: dev | Picking up from #471 / #468
- use [missionComplete work in modalCongrats in stage.js](https://github.com/netlify/explorers/blob/58efe9fcd72d600639f1d9caf72c32932bee0272/src/pages/learn/%5Bmission%5D/%5Bstage%5D.js#L150) as a reference to send mission complete to activity table | 1.0 | Sends `mission-complete` data to activity table - Picking up from #471 / #468
- use [missionComplete work in modalCongrats in stage.js](https://github.com/netlify/explorers/blob/58efe9fcd72d600639f1d9caf72c32932bee0272/src/pages/learn/%5Bmission%5D/%5Bstage%5D.js#L150) as a reference to send mission complete to activity table | priority | sends mission complete data to activity table picking up from use as a reference to send mission complete to activity table | 1 |
775,158 | 27,221,264,834 | IssuesEvent | 2023-02-21 05:39:49 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Handle keywords used as identifiers such as func-names | Type/Improvement Priority/High Team/CompilerFE Area/Parser | **Description:**
$subject.
Examples are `start()`, `schedular.start()`, `client->continue()`, `x.map()`
| 1.0 | Handle keywords used as identifiers such as func-names - **Description:**
$subject.
Examples are `start()`, `schedular.start()`, `client->continue()`, `x.map()`
| priority | handle keywords used as identifiers such as func names description subject examples are start schedular start client continue x map | 1 |
275,993 | 8,583,025,741 | IssuesEvent | 2018-11-13 18:34:22 | Qiskit/qiskit-terra | https://api.github.com/repos/Qiskit/qiskit-terra | closed | The Qobj concept has gotten overly complicated. | priority: high type: discussion | I feel the qobj has become larger than what it was intended to be. I want it to be a simple serialization of a list of circuits (an object that is dump that is sent to the api and decoded after). I don't care if it is JSON, binary or email :-)
Basically, it should not exists until we run
```
dags_2_qobj (or circuits_2_qobj as in #1083)
```
and then I don't mind what it is but it does not include results or a results object. It is run on a backend
using
```
job = backend.run(qobj)
```
and then it no longer is used and its flow has ended. The Result object (which is a different folder or module) is made by
```
result = job.result()
```
So my questions are why do we have the _results in the qobj folder. It should not and if we want to have an internal object that handles what is returned by the backend then it lives in the Result folder and used by the result object. I don't want to think of qobj as the new object that handles the API. It is only the input.
---
I see that we have functions for converting to the old version. Why do we have these? I see the need in the future when we qobj v1 to qobj v2 we should have a conversion, but why can't we, for now, have these as part of the run method in the backend. ie hidden from qiskit terra in the ibm_provider
---
other things with qobj
1. validate against schema
```python
qobj_schema = json.load("schemas/qobj_schema.json") # qiskit defines this
backend_qobj_schema = backend.schema() # backend defines this. qiskit gets it via API call
jsonschema.validate(qobj_to_json(qobj), qobj_schema)
jsonschema.validate(qobj_to_json(qobj), backend_qobj_schema)
```
2. convert to circuits
```python
circuits = qobj_to_circuits(qobj)
```
3. convert between versions. Say we make the version 2 and the backend is version 1 where does this happen. I feel this should happen in the dags_2_qobj as qobj is lossy and the backend just uses the schema to check the version it supports. If we need to have some utility functions like qobj1_to_qobj2 etc for versions that we can update then we can have them in tools.
4. run smart validation see #1057 | 1.0 | The Qobj concept has gotten overly complicated. - I feel the qobj has become larger than what it was intended to be. I want it to be a simple serialization of a list of circuits (an object that is dump that is sent to the api and decoded after). I don't care if it is JSON, binary or email :-)
Basically, it should not exists until we run
```
dags_2_qobj (or circuits_2_qobj as in #1083)
```
and then I don't mind what it is but it does not include results or a results object. It is run on a backend
using
```
job = backend.run(qobj)
```
and then it no longer is used and its flow has ended. The Result object (which is a different folder or module) is made by
```
result = job.result()
```
So my questions are why do we have the _results in the qobj folder. It should not and if we want to have an internal object that handles what is returned by the backend then it lives in the Result folder and used by the result object. I don't want to think of qobj as the new object that handles the API. It is only the input.
---
I see that we have functions for converting to the old version. Why do we have these? I see the need in the future when we qobj v1 to qobj v2 we should have a conversion, but why can't we, for now, have these as part of the run method in the backend. ie hidden from qiskit terra in the ibm_provider
---
other things with qobj
1. validate against schema
```python
qobj_schema = json.load("schemas/qobj_schema.json") # qiskit defines this
backend_qobj_schema = backend.schema() # backend defines this. qiskit gets it via API call
jsonschema.validate(qobj_to_json(qobj), qobj_schema)
jsonschema.validate(qobj_to_json(qobj), backend_qobj_schema)
```
2. convert to circuits
```python
circuits = qobj_to_circuits(qobj)
```
3. convert between versions. Say we make the version 2 and the backend is version 1 where does this happen. I feel this should happen in the dags_2_qobj as qobj is lossy and the backend just uses the schema to check the version it supports. If we need to have some utility functions like qobj1_to_qobj2 etc for versions that we can update then we can have them in tools.
4. run smart validation see #1057 | priority | the qobj concept has gotten overly complicated i feel the qobj has become larger than what it was intended to be i want it to be a simple serialization of a list of circuits an object that is dump that is sent to the api and decoded after i don t care if it is json binary or email basically it should not exists until we run dags qobj or circuits qobj as in and then i don t mind what it is but it does not include results or a results object it is run on a backend using job backend run qobj and then it no longer is used and its flow has ended the result object which is a different folder or module is made by result job result so my questions are why do we have the results in the qobj folder it should not and if we want to have an internal object that handles what is returned by the backend then it lives in the result folder and used by the result object i don t want to think of qobj as the new object that handles the api it is only the input i see that we have functions for converting to the old version why do we have these i see the need in the future when we qobj to qobj we should have a conversion but why can t we for now have these as part of the run method in the backend ie hidden from qiskit terra in the ibm provider other things with qobj validate against schema python qobj schema json load schemas qobj schema json qiskit defines this backend qobj schema backend schema backend defines this qiskit gets it via api call jsonschema validate qobj to json qobj qobj schema jsonschema validate qobj to json qobj backend qobj schema convert to circuits python circuits qobj to circuits qobj convert between versions say we make the version and the backend is version where does this happen i feel this should happen in the dags qobj as qobj is lossy and the backend just uses the schema to check the version it supports if we need to have some utility functions like to etc for versions that we can update then we can have them in 
tools run smart validation see | 1 |
466,757 | 13,433,176,817 | IssuesEvent | 2020-09-07 09:27:27 | chubaofs/chubaofs | https://api.github.com/repos/chubaofs/chubaofs | closed | Refactor and enhance node offline process for DataNode and Metanode. | enhancement priority/high | The current offline logic for DataNode and MetaNode relies on Raft algorithm and logic, which introduces the following problems:
1. When rejoining the partition that was removed, if Raft has a low number of logs, Raft will copy the logs and replay the logs to restore the partition data on the new node. The logs from which the partition was removed will be applied, causing problems with changes to the partition's replica members.
2. When the partition copy group has no leader, the partition copy members cannot be adjusted and the node cannot be taken offline.
It is necessary to adjust the offline logic of DataNode and MetaNode. When processing the logic of adjusting the members of the partition copy, it does not go through or rely on Raft related mechanisms. These adjustments may require proper reconstruction of the Raft module used by ChubaoFS. | 1.0 | Refactor and enhance node offline process for DataNode and Metanode. - The current offline logic for DataNode and MetaNode relies on Raft algorithm and logic, which introduces the following problems:
1. When rejoining the partition that was removed, if Raft has a low number of logs, Raft will copy the logs and replay the logs to restore the partition data on the new node. The logs from which the partition was removed will be applied, causing problems with changes to the partition's replica members.
2. When the partition copy group has no leader, the partition copy members cannot be adjusted and the node cannot be taken offline.
It is necessary to adjust the offline logic of DataNode and MetaNode. When processing the logic of adjusting the members of the partition copy, it does not go through or rely on Raft related mechanisms. These adjustments may require proper reconstruction of the Raft module used by ChubaoFS. | priority | refactor and enhance node offline process for datanode and metanode the current offline logic for datanode and metanode relies on raft algorithm and logic which introduces the following problems when rejoining the partition that was removed if raft has a low number of logs raft will copy the logs and replay the logs to restore the partition data on the new node the logs from which the partition was removed will be applied causing problems with changes to the partition s replica members when the partition copy group has no leader the partition copy members cannot be adjusted and the node cannot be taken offline it is necessary to adjust the offline logic of datanode and metanode when processing the logic of adjusting the members of the partition copy it does not go through or rely on raft related mechanisms these adjustments may require proper reconstruction of the raft module used by chubaofs | 1 |
444,084 | 12,806,167,768 | IssuesEvent | 2020-07-03 08:56:52 | sosy-lab/benchexec | https://api.github.com/repos/sosy-lab/benchexec | closed | Show expected verdict in HTML table | HTML table enhancement high priority | When expected verdicts where encoded in the file name, they were visible in all tables automatically. Now this is no longer the case, so we should show them explicitly (at least optionally).
Possibilities include:
- an extra column between task name and run results (where we also show the specification if necessary)
- coloring the task name and/or specification according to the expected result (could be confused with correct/wrong, though)
- some other visual indicator like an icon (would need less space than a text column) | 1.0 | Show expected verdict in HTML table - When expected verdicts where encoded in the file name, they were visible in all tables automatically. Now this is no longer the case, so we should show them explicitly (at least optionally).
Possibilities include:
- an extra column between task name and run results (where we also show the specification if necessary)
- coloring the task name and/or specification according to the expected result (could be confused with correct/wrong, though)
- some other visual indicator like an icon (would need less space than a text column) | priority | show expected verdict in html table when expected verdicts where encoded in the file name they were visible in all tables automatically now this is no longer the case so we should show them explicitly at least optionally possibilities include an extra column between task name and run results where we also show the specification if necessary coloring the task name and or specification according to the expected result could be confused with correct wrong though some other visual indicator like an icon would need less space than a text column | 1 |
114,152 | 4,614,848,144 | IssuesEvent | 2016-09-25 20:07:59 | c0gent/NeverClicker | https://api.github.com/repos/c0gent/NeverClicker | opened | Mouse clicking outside of client window | bug priority high | Copy/Pasted from @zeusome's https://github.com/c0gent/NeverClicker/issues/11:
>Issue: For some reason, I had an issue where Neverclicker went out of Neverwinter Window and the mouse cursor was moving around/clicking things on my desktop. I returned to PC and several desktop apps had been opened. Will monitor this problem and provide more info when available. | 1.0 | Mouse clicking outside of client window - Copy/Pasted from @zeusome's https://github.com/c0gent/NeverClicker/issues/11:
>Issue: For some reason, I had an issue where Neverclicker went out of Neverwinter Window and the mouse cursor was moving around/clicking things on my desktop. I returned to PC and several desktop apps had been opened. Will monitor this problem and provide more info when available. | priority | mouse clicking outside of client window copy pasted from zeusome s issue for some reason i had an issue where neverclicker went out of neverwinter window and the mouse cursor was moving around clicking things on my desktop i returned to pc and several desktop apps had been opened will monitor this problem and provide more info when available | 1 |
450,509 | 13,012,627,925 | IssuesEvent | 2020-07-25 06:50:19 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | reopened | Learndash changing the Edit page links in admin bar to homepage instead of groups page | bug priority: high | **Describe the bug**
Groups edit page in front end redirects to Homepage
**To Reproduce**
Steps to reproduce the behavior:
Issue can be replicated in Demo
1. Go to Groups page (frontend)
2. Click on Edit Page
3. See error
**Expected behavior**
When editing Groups page, it should not redirect to Homepage edit
**Screenshots**
https://drive.google.com/file/d/1W9XNhXcswICRddM7ruLi8LY2I7bgfiGp/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1215499495/81761 | 1.0 | Learndash changing the Edit page links in admin bar to homepage instead of groups page - **Describe the bug**
Groups edit page in front end redirects to Homepage
**To Reproduce**
Steps to reproduce the behavior:
Issue can be replicated in Demo
1. Go to Groups page (frontend)
2. Click on Edit Page
3. See error
**Expected behavior**
When editing Groups page, it should not redirect to Homepage edit
**Screenshots**
https://drive.google.com/file/d/1W9XNhXcswICRddM7ruLi8LY2I7bgfiGp/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1215499495/81761 | priority | learndash changing the edit page links in admin bar to homepage instead of groups page describe the bug groups edit page in front end redirects to homepage to reproduce steps to reproduce the behavior issue can be replicated in demo go to groups page frontend click on edit page see error expected behavior when editing groups page it should not redirect to homepage edit screenshots support ticket links | 1 |
59,459 | 3,113,700,151 | IssuesEvent | 2015-09-03 01:29:41 | cs2103aug2015-t10-1j/main | https://api.github.com/repos/cs2103aug2015-t10-1j/main | opened | A user can add a deadline | priority.high type.story | ... so that the user can get a reminded at a specific time before the deadline to keep track of things. | 1.0 | A user can add a deadline - ... so that the user can get a reminded at a specific time before the deadline to keep track of things. | priority | a user can add a deadline so that the user can get a reminded at a specific time before the deadline to keep track of things | 1 |
85,051 | 3,684,268,050 | IssuesEvent | 2016-02-24 16:47:44 | Aurorastation/Aurora.3 | https://api.github.com/repos/Aurorastation/Aurora.3 | closed | Round start borgs not slaved to malf AI | Bug High Priority | When a roundstart borg had their job preference to to AI high and are assigned as a borg for a fallback job they don't appear to be slaved to the AI in the malf AI gametype.
This can lead to the malf AI being at a huge disadvantage, as borgs are more likely to notice hacked APCs, weird AI behaviour and can see the binary channel, one of the only private ways for the malf AI to comunicate with its slaved borgs that join midround. | 1.0 | Round start borgs not slaved to malf AI - When a roundstart borg had their job preference to to AI high and are assigned as a borg for a fallback job they don't appear to be slaved to the AI in the malf AI gametype.
This can lead to the malf AI being at a huge disadvantage, as borgs are more likely to notice hacked APCs, weird AI behaviour and can see the binary channel, one of the only private ways for the malf AI to comunicate with its slaved borgs that join midround. | priority | round start borgs not slaved to malf ai when a roundstart borg had their job preference to to ai high and are assigned as a borg for a fallback job they don t appear to be slaved to the ai in the malf ai gametype this can lead to the malf ai being at a huge disadvantage as borgs are more likely to notice hacked apcs weird ai behaviour and can see the binary channel one of the only private ways for the malf ai to comunicate with its slaved borgs that join midround | 1 |
826,904 | 31,717,321,937 | IssuesEvent | 2023-09-10 02:02:51 | Ottatop/pinnacle | https://api.github.com/repos/Ottatop/pinnacle | closed | Freezing when opening a bunch of Alacritty windows at once | bug high priority | Opening a bunch of Alacritty windows quickly will freeze the screen. This probably also happens with other windows. | 1.0 | Freezing when opening a bunch of Alacritty windows at once - Opening a bunch of Alacritty windows quickly will freeze the screen. This probably also happens with other windows. | priority | freezing when opening a bunch of alacritty windows at once opening a bunch of alacritty windows quickly will freeze the screen this probably also happens with other windows | 1 |
407,026 | 11,905,658,468 | IssuesEvent | 2020-03-30 18:57:31 | epidemics/covid | https://api.github.com/repos/epidemics/covid | opened | Smaller/no graph legend on mobile | high-priority | 60% of our users are on mobile, so it's important we support this use case.
However, currently the legend hides a lot of the graph:

I don't know how to solve this, but we should do _something_ about it. | 1.0 | Smaller/no graph legend on mobile - 60% of our users are on mobile, so it's important we support this use case.
However, currently the legend hides a lot of the graph:

I don't know how to solve this, but we should do _something_ about it. | priority | smaller no graph legend on mobile of our users are on mobile so it s important we support this use case however currently the legend hides a lot of the graph i don t know how to solve this but we should do something about it | 1 |
522,270 | 15,158,288,837 | IssuesEvent | 2021-02-12 00:46:45 | NOAA-GSL/MATS | https://api.github.com/repos/NOAA-GSL/MATS | closed | Single/multi-station plots display data an hour too early | Priority: High Project: MATS Status: Closed Type: Bug | ---
Author Name: **molly.b.smith** (@mollybsmith-noaa)
Original Redmine Issue: 71103, https://vlab.ncep.noaa.gov/redmine/issues/71103
Original Date: 2019-11-08
Original Assignee: molly.b.smith
---
Timeseries plots of station data seem to be shifted an hour earlier than they should be.
| 1.0 | Single/multi-station plots display data an hour too early - ---
Author Name: **molly.b.smith** (@mollybsmith-noaa)
Original Redmine Issue: 71103, https://vlab.ncep.noaa.gov/redmine/issues/71103
Original Date: 2019-11-08
Original Assignee: molly.b.smith
---
Timeseries plots of station data seem to be shifted an hour earlier than they should be.
| priority | single multi station plots display data an hour too early author name molly b smith mollybsmith noaa original redmine issue original date original assignee molly b smith timeseries plots of station data seem to be shifted an hour earlier than they should be | 1 |
606,049 | 18,753,971,881 | IssuesEvent | 2021-11-05 08:15:25 | AY2122S1-CS2113T-F14-1/tp | https://api.github.com/repos/AY2122S1-CS2113T-F14-1/tp | closed | [PE-D] Unhandled exception during edit of habits | priority.High | Upon changing of habit's interval to -1. Program crashes.

<!--session: 1635497040570-46ef78bd-b586-4c38-806b-ab369c91a526-->
<!--Version: Web v3.4.1-->
-------------
Labels: `type.FunctionalityBug` `severity.High`
original: andrewtkh1/ped#5 | 1.0 | [PE-D] Unhandled exception during edit of habits - Upon changing of habit's interval to -1. Program crashes.

<!--session: 1635497040570-46ef78bd-b586-4c38-806b-ab369c91a526-->
<!--Version: Web v3.4.1-->
-------------
Labels: `type.FunctionalityBug` `severity.High`
original: andrewtkh1/ped#5 | priority | unhandled exception during edit of habits upon changing of habit s interval to program crashes labels type functionalitybug severity high original ped | 1 |
115,305 | 4,662,924,288 | IssuesEvent | 2016-10-05 07:08:23 | CovertJaguar/Railcraft | https://api.github.com/repos/CovertJaguar/Railcraft | closed | Broken Forestry Module | bug priority-high | ```
org.apache.logging.log4j.core.appender.AppenderLoggingException: An exception occurred processing Appender ServerGuiConsole
at org.apache.logging.log4j.core.appender.DefaultErrorHandler.error(DefaultErrorHandler.java:73)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:101)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:425)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:406)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367)
at org.apache.logging.log4j.core.Logger.log(Logger.java:110)
at org.apache.logging.log4j.spi.AbstractLogger.log(AbstractLogger.java:1362)
at mods.railcraft.common.util.misc.Game.log(Game.java:80)
at mods.railcraft.common.util.misc.Game.log(Game.java:76)
at mods.railcraft.common.util.misc.Game.logThrowable(Game.java:115)
at mods.railcraft.common.util.misc.Game.logErrorAPI(Game.java:129)
at mods.railcraft.common.plugins.forestry.ForestryPlugin$ForestryPluginInstalled.addCarpenterRecipe(ForestryPlugin.java:294)
at mods.railcraft.common.modules.ModuleForestry$1.postInit(ModuleForestry.java:52)
at mods.railcraft.common.modules.RailcraftModulePayload$BaseModuleEventHandler.postInit(RailcraftModulePayload.java:82)
at mods.railcraft.common.modules.RailcraftModuleManager$Stage$4.passToModule(RailcraftModuleManager.java:278)
at mods.railcraft.common.modules.RailcraftModuleManager.processStage(RailcraftModuleManager.java:223)
at mods.railcraft.common.modules.RailcraftModuleManager.postInit(RailcraftModuleManager.java:210)
at mods.railcraft.common.core.Railcraft.postInit(Railcraft.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at net.minecraftforge.fml.common.FMLModContainer.handleModStateEvent(FMLModContainer.java:597)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at net.minecraftforge.fml.common.LoadController.sendEventToModContainer(LoadController.java:239)
at net.minecraftforge.fml.common.LoadController.propogateStateMessage(LoadController.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at net.minecraftforge.fml.common.LoadController.distributeStateMessage(LoadController.java:142)
at net.minecraftforge.fml.common.Loader.initializeMods(Loader.java:795)
at net.minecraftforge.fml.server.FMLServerHandler.finishServerLoading(FMLServerHandler.java:107)
at net.minecraftforge.fml.common.FMLCommonHandler.onServerStarted(FMLCommonHandler.java:333)
at net.minecraft.server.dedicated.DedicatedServer.func_71197_b(DedicatedServer.java:214)
at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:431)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: can't parse argument number: concrete
at java.text.MessageFormat.makeFormat(MessageFormat.java:1429)
at java.text.MessageFormat.applyPattern(MessageFormat.java:479)
at java.text.MessageFormat.<init>(MessageFormat.java:362)
at java.text.MessageFormat.format(MessageFormat.java:840)
at org.apache.logging.log4j.message.MessageFormatMessage.formatMessage(MessageFormatMessage.java:89)
at org.apache.logging.log4j.message.MessageFormatMessage.getFormattedMessage(MessageFormatMessage.java:61)
at org.apache.logging.log4j.core.pattern.MessagePatternConverter.format(MessagePatternConverter.java:68)
at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36)
at org.apache.logging.log4j.core.pattern.RegexReplacementConverter.format(RegexReplacementConverter.java:93)
at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36)
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:167)
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:52)
at com.mojang.util.QueueLogAppender.append(QueueLogAppender.java:39)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:99)
... 47 more
Caused by: java.lang.NumberFormatException: For input string: "concrete"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at java.text.MessageFormat.makeFormat(MessageFormat.java:1427)
... 60 more
``` | 1.0 | Broken Forestry Module - ```
org.apache.logging.log4j.core.appender.AppenderLoggingException: An exception occurred processing Appender ServerGuiConsole
at org.apache.logging.log4j.core.appender.DefaultErrorHandler.error(DefaultErrorHandler.java:73)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:101)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:425)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:406)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367)
at org.apache.logging.log4j.core.Logger.log(Logger.java:110)
at org.apache.logging.log4j.spi.AbstractLogger.log(AbstractLogger.java:1362)
at mods.railcraft.common.util.misc.Game.log(Game.java:80)
at mods.railcraft.common.util.misc.Game.log(Game.java:76)
at mods.railcraft.common.util.misc.Game.logThrowable(Game.java:115)
at mods.railcraft.common.util.misc.Game.logErrorAPI(Game.java:129)
at mods.railcraft.common.plugins.forestry.ForestryPlugin$ForestryPluginInstalled.addCarpenterRecipe(ForestryPlugin.java:294)
at mods.railcraft.common.modules.ModuleForestry$1.postInit(ModuleForestry.java:52)
at mods.railcraft.common.modules.RailcraftModulePayload$BaseModuleEventHandler.postInit(RailcraftModulePayload.java:82)
at mods.railcraft.common.modules.RailcraftModuleManager$Stage$4.passToModule(RailcraftModuleManager.java:278)
at mods.railcraft.common.modules.RailcraftModuleManager.processStage(RailcraftModuleManager.java:223)
at mods.railcraft.common.modules.RailcraftModuleManager.postInit(RailcraftModuleManager.java:210)
at mods.railcraft.common.core.Railcraft.postInit(Railcraft.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at net.minecraftforge.fml.common.FMLModContainer.handleModStateEvent(FMLModContainer.java:597)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at net.minecraftforge.fml.common.LoadController.sendEventToModContainer(LoadController.java:239)
at net.minecraftforge.fml.common.LoadController.propogateStateMessage(LoadController.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at net.minecraftforge.fml.common.LoadController.distributeStateMessage(LoadController.java:142)
at net.minecraftforge.fml.common.Loader.initializeMods(Loader.java:795)
at net.minecraftforge.fml.server.FMLServerHandler.finishServerLoading(FMLServerHandler.java:107)
at net.minecraftforge.fml.common.FMLCommonHandler.onServerStarted(FMLCommonHandler.java:333)
at net.minecraft.server.dedicated.DedicatedServer.func_71197_b(DedicatedServer.java:214)
at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:431)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: can't parse argument number: concrete
at java.text.MessageFormat.makeFormat(MessageFormat.java:1429)
at java.text.MessageFormat.applyPattern(MessageFormat.java:479)
at java.text.MessageFormat.<init>(MessageFormat.java:362)
at java.text.MessageFormat.format(MessageFormat.java:840)
at org.apache.logging.log4j.message.MessageFormatMessage.formatMessage(MessageFormatMessage.java:89)
at org.apache.logging.log4j.message.MessageFormatMessage.getFormattedMessage(MessageFormatMessage.java:61)
at org.apache.logging.log4j.core.pattern.MessagePatternConverter.format(MessagePatternConverter.java:68)
at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36)
at org.apache.logging.log4j.core.pattern.RegexReplacementConverter.format(RegexReplacementConverter.java:93)
at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36)
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:167)
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:52)
at com.mojang.util.QueueLogAppender.append(QueueLogAppender.java:39)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:99)
... 47 more
Caused by: java.lang.NumberFormatException: For input string: "concrete"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at java.text.MessageFormat.makeFormat(MessageFormat.java:1427)
... 60 more
``` | priority | broken forestry module org apache logging core appender appenderloggingexception an exception occurred processing appender serverguiconsole at org apache logging core appender defaulterrorhandler error defaulterrorhandler java at org apache logging core config appendercontrol callappender appendercontrol java at org apache logging core config loggerconfig callappenders loggerconfig java at org apache logging core config loggerconfig log loggerconfig java at org apache logging core config loggerconfig log loggerconfig java at org apache logging core logger log logger java at org apache logging spi abstractlogger log abstractlogger java at mods railcraft common util misc game log game java at mods railcraft common util misc game log game java at mods railcraft common util misc game logthrowable game java at mods railcraft common util misc game logerrorapi game java at mods railcraft common plugins forestry forestryplugin forestryplugininstalled addcarpenterrecipe forestryplugin java at mods railcraft common modules moduleforestry postinit moduleforestry java at mods railcraft common modules railcraftmodulepayload basemoduleeventhandler postinit railcraftmodulepayload java at mods railcraft common modules railcraftmodulemanager stage passtomodule railcraftmodulemanager java at mods railcraft common modules railcraftmodulemanager processstage railcraftmodulemanager java at mods railcraft common modules railcraftmodulemanager postinit railcraftmodulemanager java at mods railcraft common core railcraft postinit railcraft java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at net minecraftforge fml common fmlmodcontainer handlemodstateevent fmlmodcontainer java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl 
invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com google common eventbus eventsubscriber handleevent eventsubscriber java at com google common eventbus synchronizedeventsubscriber handleevent synchronizedeventsubscriber java at com google common eventbus eventbus dispatch eventbus java at com google common eventbus eventbus dispatchqueuedevents eventbus java at com google common eventbus eventbus post eventbus java at net minecraftforge fml common loadcontroller sendeventtomodcontainer loadcontroller java at net minecraftforge fml common loadcontroller propogatestatemessage loadcontroller java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com google common eventbus eventsubscriber handleevent eventsubscriber java at com google common eventbus synchronizedeventsubscriber handleevent synchronizedeventsubscriber java at com google common eventbus eventbus dispatch eventbus java at com google common eventbus eventbus dispatchqueuedevents eventbus java at com google common eventbus eventbus post eventbus java at net minecraftforge fml common loadcontroller distributestatemessage loadcontroller java at net minecraftforge fml common loader initializemods loader java at net minecraftforge fml server fmlserverhandler finishserverloading fmlserverhandler java at net minecraftforge fml common fmlcommonhandler onserverstarted fmlcommonhandler java at net minecraft server dedicated dedicatedserver func b dedicatedserver java at net minecraft server minecraftserver run minecraftserver java at java lang thread run thread java caused by java lang illegalargumentexception can t parse argument number concrete at java text messageformat makeformat messageformat java at java text messageformat applypattern messageformat java 
at java text messageformat messageformat java at java text messageformat format messageformat java at org apache logging message messageformatmessage formatmessage messageformatmessage java at org apache logging message messageformatmessage getformattedmessage messageformatmessage java at org apache logging core pattern messagepatternconverter format messagepatternconverter java at org apache logging core pattern patternformatter format patternformatter java at org apache logging core pattern regexreplacementconverter format regexreplacementconverter java at org apache logging core pattern patternformatter format patternformatter java at org apache logging core layout patternlayout toserializable patternlayout java at org apache logging core layout patternlayout toserializable patternlayout java at com mojang util queuelogappender append queuelogappender java at org apache logging core config appendercontrol callappender appendercontrol java more caused by java lang numberformatexception for input string concrete at java lang numberformatexception forinputstring numberformatexception java at java lang integer parseint integer java at java lang integer parseint integer java at java text messageformat makeformat messageformat java more | 1 |
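Editorial note on the Railcraft record above: the root cause, `java.lang.IllegalArgumentException: can't parse argument number: concrete`, comes from `java.text.MessageFormat`, which parses everything between `{` and `}` as a numeric argument index, so a logged message containing a literal token such as `{concrete}` fails inside `Integer.parseInt` exactly as the trace shows. A minimal, standalone sketch (not Railcraft code) reproducing the same exception:

```java
import java.text.MessageFormat;

public class MessageFormatPitfall {
    public static void main(String[] args) {
        // Valid: {0} is a numeric argument index.
        System.out.println(MessageFormat.format("Invalid recipe: {0}", "rail"));

        // Invalid: MessageFormat calls Integer.parseInt("concrete") on the
        // brace contents and wraps the NumberFormatException in an
        // IllegalArgumentException, matching the stack trace above.
        try {
            MessageFormat.format("Invalid recipe: {concrete}");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Quoting the braces (`'{'concrete'}'`) makes MessageFormat treat them as literal text, and log4j's default `{}` parameterized messages avoid the numeric-index parsing entirely.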
695,217 | 23,849,169,551 | IssuesEvent | 2022-09-06 16:17:26 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | 'Upload PDF' button doesn't change the uploading status when selecting another entity | Bug :lady_beetle: Sprint Priority: High Frontend :sunglasses: | **Describe the bug**
The label of the 'Upload PDF' button keeps the status of the last executed action across entities, i.e. after a failed document upload, 'Upload PDF' keeps the "An error occurred" label when another entity is selected.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the Library
2. Add an invalid document to an existing entity (the 'Upload button' will show the label 'An error occurred')
3. Select another existing entity
4. The button remains with the label 'An error occurred'
**Expected behavior**
When selecting another entity, the button should show that entity's status: by default, the action to add a new document (and the same when re-selecting the entity whose upload failed?)
**Screenshots**
Entity with error
<img width="988" alt="Screen Shot 2022-08-16 at 11 49 18" src="https://user-images.githubusercontent.com/5322716/184935007-fba309e9-77cf-4815-bc68-c74bf02ebcf0.png">
Another entity
<img width="1024" alt="Screen Shot 2022-08-16 at 11 50 03" src="https://user-images.githubusercontent.com/5322716/184935023-ab57cdbe-827d-4329-ab18-6a6a8c504874.png">
**Device (please select all that apply)**
- [x] Desktop
- [x] Mobile
**Browser**
Firefox, Chrome
| 1.0 | 'Upload PDF' button doesn't change the uploading status when selecting another entity - **Describe the bug**
The label of the 'Upload PDF' button keeps the status of the last executed action across entities, i.e. after a failed document upload, 'Upload PDF' keeps the "An error occurred" label when another entity is selected.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the Library
2. Add an invalid document to an existing entity (the 'Upload button' will show the label 'An error occurred')
3. Select another existing entity
4. The button remains with the label 'An error occurred'
**Expected behavior**
When selecting another entity, the button should show that entity's status: by default, the action to add a new document (and the same when re-selecting the entity whose upload failed?)
**Screenshots**
Entity with error
<img width="988" alt="Screen Shot 2022-08-16 at 11 49 18" src="https://user-images.githubusercontent.com/5322716/184935007-fba309e9-77cf-4815-bc68-c74bf02ebcf0.png">
Another entity
<img width="1024" alt="Screen Shot 2022-08-16 at 11 50 03" src="https://user-images.githubusercontent.com/5322716/184935023-ab57cdbe-827d-4329-ab18-6a6a8c504874.png">
**Device (please select all that apply)**
- [x] Desktop
- [x] Mobile
**Browser**
Firefox, Chrome
| priority | upload pdf button doesn t change the uploading status when selecting other entity describe the bug the label of the upload pdf button keeps the status of the last executed action across entities ie after a fail uploading a document upload pdf keeps with the an error occurred label when selecting another entity to reproduce steps to reproduce the behavior go to the library add an invalid document to an existing entity the upload button will show the label an error occurred select another existing entity the button remains with the label an error occurred expected behavior when selecting another entity the button should show its status by default the action to add a new document the same in the case of selecting again the document with the failure screenshots entity with error img width alt screen shot at src another entity img width alt screen shot at src device please select all that apply desktop mobile browser firefox chrome | 1 |
466,250 | 13,398,987,017 | IssuesEvent | 2020-09-03 13:54:15 | protofire/omen-exchange | https://api.github.com/repos/protofire/omen-exchange | closed | 'My Markets' not displaying markets that a user has created | bug priority:high | I assume this came about from making the change to start displaying markets that users have interacted with in the 'My Markets' view, but for some reason, it no longer returns a user's created markets. | 1.0 | 'My Markets' not displaying markets that a user has created - I assume this came about from making the change to start displaying markets that users have interacted with in the 'My Markets' view, but for some reason, it no longer returns a user's created markets. | priority | my markets not displaying markets that a user has created i assume this came about from making the change to start displaying markets that users have interacted with in the my markets view but for some reason it no longer returns a user s created markets | 1 |
444,629 | 12,815,186,294 | IssuesEvent | 2020-07-05 00:17:41 | acl-org/acl-2020-virtual-conference | https://api.github.com/repos/acl-org/acl-2020-virtual-conference | closed | Workshop: Open each workshop paper talk in a new page | priority:high volunteer needed | Currently, all pre-recorded talks of a workshop are all displayed in one page (e.g. https://virtual.acl2020.org/workshop_W1.html has about 50 talks all in one page).
Can we instead have a separate page for each talk and just have a link to that page in the main page? | 1.0 | Workshop: Open each workshop paper talk in a new page - Currently, all pre-recorded talks of a workshop are all displayed in one page (e.g. https://virtual.acl2020.org/workshop_W1.html has about 50 talks all in one page).
Can we instead have a separate page for each talk and just have a link to that page in the main page? | priority | workshop open each workshop paper talk in a new page currently all pre recorded talks of a workshop are all displayed in one page e g has about talks all in one page can we instead have a separate page for each talk and just have a link to that page in the main page | 1 |
625,236 | 19,722,928,365 | IssuesEvent | 2022-01-13 17:00:36 | DiscordBot-PMMP/DiscordBot | https://api.github.com/repos/DiscordBot-PMMP/DiscordBot | closed | Better description | Priority: High Type: Suggestion Status: In Progress | I cant understand what this plugin do it is a lib or something else? I dont think it is lib since it has token in resources but idk what this plugin do and how do we use this pls we need a better description to understand. | 1.0 | Better description - I cant understand what this plugin do it is a lib or something else? I dont think it is lib since it has token in resources but idk what this plugin do and how do we use this pls we need a better description to understand. | priority | better description i cant understand what this plugin do it is a lib or something else i dont think it is lib since it has token in resources but idk what this plugin do and how do we use this pls we need a better description to understand | 1 |
65,292 | 3,227,386,968 | IssuesEvent | 2015-10-11 05:10:20 | sohelvali/Test-Git-Issue | https://api.github.com/repos/sohelvali/Test-Git-Issue | closed | Uploaded document (Astellas) does not map Clinical Pharmacology Section | Completed High Priority Upload | Screenshot and uploaded doc attached. | 1.0 | Uploaded document (Astellas) does not map Clinical Pharmacology Section - Screenshot and uploaded doc attached. | priority | uploaded document astellas does not map clinical pharmacology section screenshot and uploaded doc attached | 1 |
312,048 | 9,542,337,350 | IssuesEvent | 2019-05-01 03:16:11 | vectorlit/unofficial_gencon_mobile | https://api.github.com/repos/vectorlit/unofficial_gencon_mobile | opened | Feature/Bug Fix: Database speed is slow | bug enhancement high priority | The database speed - both on commits and searches - is too slow. Whether this is related to the SQFlite (SQLite) implementation, the single-threaded nature of the app, the serialization/deserializations, or anything else is unknown. But the goal of this issue is to resolve general sluggishness in download commits, searches, and search result list bindings.
Use cases/examples:
- Displaying ALL events in the database (typically ~18,000 events) should take 3-5 seconds on a modern device. It currently takes ~45 seconds.
- Committing ALL events to the database, including network download (typically ~18,000 events) should take ~30 seconds on a modern device. It currently takes ~2 minutes. | 1.0 | Feature/Bug Fix: Database speed is slow - The database speed - both on commits and searches - is too slow. Whether this is related to the SQFlite (SQLite) implementation, the single-threaded nature of the app, the serialization/deserializations, or anything else is unknown. But the goal of this issue is to resolve general sluggishness in download commits, searches, and search result list bindings.
Use cases/examples:
- Displaying ALL events in the database (typically ~18,000 events) should take 3-5 seconds on a modern device. It currently takes ~45 seconds.
- Committing ALL events to the database, including network download (typically ~18,000 events) should take ~30 seconds on a modern device. It currently takes ~2 minutes. | priority | feature bug fix database speed is slow the database speed both on commits and searches is too slow whether this is related to the sqflite sqlite implementation the single threaded nature of the app the serialization deserializations or anything else is unknown but the goal of this issue is to resolve general sluggishness in download commits searches and search result list bindings use cases examples displaying all events in the database typically events should take seconds on a modern device it currently takes seconds committing all events to the database including network download typically events should take seconds on a modern device it currently takes minutes | 1 |
482,742 | 13,912,512,230 | IssuesEvent | 2020-10-20 18:59:13 | OpenLiberty/ci.maven | https://api.github.com/repos/OpenLiberty/ci.maven | closed | Track Dockerfile changes and rebuild image on changes | devMode devModeContainers enhancement high priority | Track changes to the Dockerfile itself and rebuild/restart container. | 1.0 | Track Dockerfile changes and rebuild image on changes - Track changes to the Dockerfile itself and rebuild/restart container. | priority | track dockerfile changes and rebuild image on changes track changes to the dockerfile itself and rebuild restart container | 1 |
726,931 | 25,016,793,225 | IssuesEvent | 2022-11-03 19:34:59 | Zefau/ioBroker.jarvis | https://api.github.com/repos/Zefau/ioBroker.jarvis | closed | After updating to v3.1.0-beta.8, some values are no longer visible | :bug: bug HIGH PRIORITY | After updating from v3.1.0-beta.6 to v3.1.0-beta.8, my screens look rather sad:
Many values, and in some cases icons as well, are no longer loaded. Here is an example that should actually contain many values:

The browser (Edge) shows this:

| 1.0 | After updating to v3.1.0-beta.8, some values are no longer visible - After updating from v3.1.0-beta.6 to v3.1.0-beta.8, my screens look rather sad:
Many values, and in some cases icons as well, are no longer loaded. Here is an example that should actually contain many values:

The browser (Edge) shows this:

| priority | after updating to beta some values no longer visible after updating from beta to beta my screens look rather sad many values and in some cases icons as well are no longer loaded here is an example that should actually contain many values the browser edge shows this | 1 |
31,883 | 2,740,845,460 | IssuesEvent | 2015-04-21 06:51:22 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | Custom location page: review of topline numbers and charts section | Priority-High | I think we need to review the way we are using the topline numbers and charts section to support several cases:
- topline name on 2 rows
- topline source on 2 rows
- different number of toplines vs charts | 1.0 | Custom location page: review of topline numbers and charts section - I think we need to review the way we are using the topline numbers and charts section to support several cases:
- topline name on 2 rows
- topline source on 2 rows
- different number of toplines vs charts | priority | custom location page review of topline numbers and charts section i think we need to review the way we are using the topline numbers and charts section to support several cases topline name on rows topline source on rows different number of toplines vs charts | 1 |
278,350 | 8,639,703,867 | IssuesEvent | 2018-11-23 20:53:46 | tophat/yvm | https://api.github.com/repos/tophat/yvm | closed | Getting a segfault when trying to do a `yarn install` using yvm | bug help wanted high priority must have | **Describe the bug**
When I run `yarn install` with `yvm` I'm getting a segfault:
```
75162/91112/Users/******/.yvm/yvm.sh: line 82: 60566 Segmentation fault: 11 node "${YVM_DIR}/yvm.js" $@
make: *** [node_modules] Error 139
``` | 1.0 | Getting a segfault when trying to do a `yarn install` using yvm - **Describe the bug**
When I run `yarn install` with `yvm` I'm getting a segfault:
```
75162/91112/Users/******/.yvm/yvm.sh: line 82: 60566 Segmentation fault: 11 node "${YVM_DIR}/yvm.js" $@
make: *** [node_modules] Error 139
``` | priority | getting a segfault when trying to do a yarn install using yvm describe the bug when i run yarn install with yvm i m getting a segfault users yvm yvm sh line segmentation fault node yvm dir yvm js make error | 1 |
696,299 | 23,895,880,359 | IssuesEvent | 2022-09-08 14:44:58 | pystardust/ani-cli | https://api.github.com/repos/pystardust/ani-cli | closed | When I search for a specific anime, the result is 'video URL not found' | type: bug category: url priority 1: high | **Metadata (please complete the following information)**
Version: 3.3
OS: Windows 10
Shell: git bash
Anime: Ore Dake Haireru Kakushi Dungeon
**Describe the bug**
After I enter the name of the anime and select an episode, it outputs hex errors and then results in 'video URL not found'
**Steps To Reproduce**
1. Open git bash normally
2. Run ani-cli
3. Input 'ore dake'
4. choose 4
5. choose 7 (episode)
**Expected behavior**

| 1.0 | When I search for a specific anime, the result is 'video URL not found' - **Metadata (please complete the following information)**
Version: 3.3
OS: Windows 10
Shell: git bash
Anime: Ore Dake Haireru Kakushi Dungeon
**Describe the bug**
After I enter the name of the anime and select an episode, it outputs hex errors and then results in 'video URL not found'
**Steps To Reproduce**
1. Open git bash normally
2. Run ani-cli
3. Input 'ore dake'
4. choose 4
5. choose 7 (episode)
**Expected behavior**

| priority | when i searched for a specific anime the result returns to video url not found metadata please complete the following information version os windows shell git bash anime ore dake haireru kakushi dungeon describe the bug after i enter the name of the anime and selected episode it outputs hex errors and then results to video url not found steps to reproduce open git bash normally run ani cli input ore dake choose choose episode expected behavior | 1 |
25,373 | 2,680,828,958 | IssuesEvent | 2015-03-27 02:14:51 | JacobFrericks/HabitRPG-Native-Client | https://api.github.com/repos/JacobFrericks/HabitRPG-Native-Client | opened | "Does Not Repeat" text doesn't change when repeating | bug priority: high | Expected:
- add task
- set repeating to anything
- text should say "Repeating Monday, Wednesday" or "Repeating Weekdays" etc
Actual:
- add task
- set repeating to anything
- text still says "Does Not Repeat" | 1.0 | "Does Not Repeat" text doesn't change when repeating - Expected:
- add task
- set repeating to anything
- text should say "Repeating Monday, Wednesday" or "Repeating Weekdays" etc
Actual:
- add task
- set repeating to anything
- text still says "Does Not Repeat" | priority | does not repeat text doesn t change when repeating expected add task set repeating to anything text should say repeating monday wedensday or repeating weekdays etc actual add task set repeating to anything text still says does not repeat | 1 |
390,171 | 11,525,808,822 | IssuesEvent | 2020-02-15 11:08:57 | wso2/docs-is | https://api.github.com/repos/wso2/docs-is | closed | Grammar and Spelling errors in the Configuring SMSOTP doc | Affected/5.10.0 Priority/High Severity/Minor | **Description:**
Noted several grammar and spelling errors in [[1]](https://is.docs.wso2.com/en/next/learn/configuring-sms-otp/).
1. In **Enable SMSOTP** section > Value/Description table > row 2, column 2, "diable" should be "disable"
2. In **Enable SMSOTP** section > Value/Description table > row 7, column 2, "SMMSOTP" should be "SMSOTP"
3. In **Enable SMSOTP** section > Value/Description table > row 13, column 2, "unable" should be "enable"
4. In **Configuring the service provider** section > numbered point 1, "Navigate to Main tab -> Identity -> Identity Providers -> Add" should be "Navigate to Main tab -> Identity -> Service Providers -> Add" | 1.0 | priority | 1
653,690 | 21,610,542,926 | IssuesEvent | 2022-05-04 09:38:32 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Password Reset confirmationcode not invalidating after first use | Priority/High Severity/Major bug 5.12.0-bug-fixing | **Describe the issue:**
We are able to use the same password reset email link to access the challenge questions and successfully reset user passwords multiple times before the 1440 minutes are up on its time expiration.
**How to reproduce:**
Step 1: Select 'Forgot Password?'
Step 2: Input email address
Step 3: Follow link in email to Change Password

Step 4: Correctly answer challenge questions and change password
Step 5: Click on same link which includes same confirmation code displayed in image above
Step 6: Correctly answer challenge questions and change password again
**Expected behavior:**
We expect that at Step 5 the confirmation code should have already been invalidated upon resetting the password in step 4 so the link should no longer allow you to change the password again
**Environment information:**
- Product Version: ISKM 5.10.0
- OS: RHEL 7.9
- Database: postgres13
- Userstore: JDBC
 | 1.0 | priority | 1
322,963 | 9,834,373,935 | IssuesEvent | 2019-06-17 09:32:25 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Issue with rank math,index should be same in both AMP and in Non-AMP in custom front page | NEED FAST REVIEW [Priority: HIGH] bug waiting | Des: Issue with rank math, the index should be same in both AMP and in Non-AMP in the custom front page
Ref: https://secure.helpscout.net/conversation/857663247/67968/
The below settings file is the rank math settings file at the user end, the issue is created in the local also.
[rank-math-settings-2019-06-04-08-12-18.txt](https://github.com/ahmedkaludi/accelerated-mobile-pages/files/3251330/rank-math-settings-2019-06-04-08-12-18.txt)
 | 1.0 | priority | 1
187,619 | 6,759,735,909 | IssuesEvent | 2017-10-24 18:08:16 | blueprintmedicines/centromere | https://api.github.com/repos/blueprintmedicines/centromere | opened | Data Import Commons: Mutation Readers | Module: Core Priority: High Status: Investigating Type: Enhancement | Need better commons classes for reading mutation data files. This is tricky, since even though there are standards for mutation file formats (eg. [MAF](https://wiki.nci.nih.gov/display/TCGA/Mutation+Annotation+Format+(MAF)+Specification) and [VCF](http://www.internationalgenome.org/wiki/Analysis/Variant%20Call%20Format/vcf-variant-call-format-version-40/)), these are not REALLY standard, and a lot of the key fields vary by the tool used to generate them.
Suggested solution: generic, abstract implementation of readers for each file format that cover the common field, and subclass implementations that cover specific tool output formats (eg. Mutect or SNPEFF). | 1.0 | priority | 1
616,802 | 19,321,170,315 | IssuesEvent | 2021-12-14 05:51:56 | CDCgov/prime-reportstream | https://api.github.com/repos/CDCgov/prime-reportstream | closed | Tribal Citizenship is not reported correctly in the HL7 ReportStream produces | bug onboarding-ops HL7 high-priority data-issue | **Describe the bug**
The tribal citizenship is not coded correctly in the HL7 ReportStream produces. Specifically, the PID-39 field is missing PID-39.2 and PID-39.3.
**Impact**
WA has complained that their system is not able to digest the PID-39 from ReportStream.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
PID-39 should be <tribe_id>^<tribe_name>^HL70171
**Additional context**
Problem reported by WA.
The `pdi-covid-19-0f6373fe-3695-4ca5-ad74-eab12ab1c19b-20211122163501.csv` report from simple_report contains a couple of real data with tribal affiliation set.
 | 1.0 | priority | 1
372,153 | 11,009,808,183 | IssuesEvent | 2019-12-04 13:26:22 | firecracker-microvm/firecracker | https://api.github.com/repos/firecracker-microvm/firecracker | closed | Ratelimiter: Very high CPU usage when ratelimiter is throttling guest RX. | Feature: IO Virtualization Performance: IO Priority: High Quality: Bug | When rate limiting is enabled and the guest is receiving a lot of traffic that triggers throttling the emulation thread will use 100% CPU.
There are 2 possible approaches here:
- quick fix: keep the current code structure and configure edge triggered epoll for the tap fd.
- proper fix: rework the current state machine and use edge triggered epoll
 | 1.0 | priority | 1
639,295 | 20,750,863,929 | IssuesEvent | 2022-03-15 07:20:28 | functionland/photos | https://api.github.com/repos/functionland/photos | closed | Improve the photos list animations (600$ Bounty) (aka gallery pinch/zoom) | bug enhancement help wanted High Priority | ## What issue do we need to solve?
We are going to improve the photos list animations when pinching and zooming (in/out) the list to change the number of columns (View Type).
Since the Photos App is going to load a large list of images we need to use a RecyclerView to manage a large list, for this purpose we used the [RecyclerListView](https://github.com/Flipkart/recyclerlistview) open-source project to handle this issue.
It works as expected for a large list, but there are some challenges to handling the image's animations when we need to change the view type.
## Actual Behavior
The animation starts after a while client pinch the view and the whole of the animation will be done.
https://user-images.githubusercontent.com/16715868/150854559-42acb060-efdb-48b8-8843-8c00588c7be2.mp4
## Expected Behavior?
The client has to be able to control the animation in the middle of the way.
https://user-images.githubusercontent.com/16715868/150853456-9d20a2d3-54bf-4988-b971-35a209d0e586.mov
## Steps to Reproduce
1. Clone the [Master](https://github.com/functionland/photos/tree/master) branch
2. Build the project
3. Pinch the list
## Platform:
Where is this issue occurring?
- [x] iOS
- [x] Android
## Tasks
- [x] The image flickering issue on changing the view is fixed.
- [ ] There is a bug when the user scroll to the end of list and try to change the gridview columns
 | 1.0 | priority | 1
410,300 | 11,986,088,648 | IssuesEvent | 2020-04-07 18:40:21 | Rammelkast/AntiCheatReloaded | https://api.github.com/repos/Rammelkast/AntiCheatReloaded | closed | WaterWalk false positive | false-positive help wanted high priority | WaterWalk gives a false positive when crossing water while holding jump. | 1.0 | priority | 1
554,362 | 16,418,755,306 | IssuesEvent | 2021-05-19 09:57:02 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | regression: crash entering darkroom, c3a2a9e4a is failure-inducing commit | bug: pending priority: high understood: unclear | **Describe the bug/issue**
double clicking an image from the normal lighttable view leads to an instant crash, backtrace attached [darktable_bt_WVHI30.txt](https://github.com/darktable-org/darktable/files/6504594/darktable_bt_WVHI30.txt) and bisect below
**To Reproduce**
1. open darktable without file path, to open the lighttable view
2. double click a raw image, to enter darkroom view
3. observe darktable crash and dump a backtrace
**Expected behavior**
enter darktable view for further processing
**Which commit introduced the error**
@AlicVB and @TurboGit can you take a look? git bisect flags your recent culling commit
```
git bisect start
# bad: [c3a2a9e4a9023af30d63ea36c7f09be6b90796f8] culling : fix oddities when enter/quit dynamic mode
git bisect bad c3a2a9e4a9023af30d63ea36c7f09be6b90796f8
# good: [cf47c7e45f48bb120d0c593e5d15d12b63edb440] Fix fuji dual demosaicing
git bisect good cf47c7e45f48bb120d0c593e5d15d12b63edb440
# good: [b41956aca4756a85b8d78475f11514b61276be68] CI : add LLVM 11
git bisect good b41956aca4756a85b8d78475f11514b61276be68
# good: [48ced170830d842e9fc7b98b8a110c4e5657fede] histogram: slightly increase maximum height
git bisect good 48ced170830d842e9fc7b98b8a110c4e5657fede
# good: [1d7dd25031e31cbe665f9ec76f2431bad8b772c4] update README
git bisect good 1d7dd25031e31cbe665f9ec76f2431bad8b772c4
# good: [0b3bb466cce41e9911d1297c6686d8568c9e149a] filmic: add negative clipping at critical places
git bisect good 0b3bb466cce41e9911d1297c6686d8568c9e149a
# good: [74e939cf24e9fb5767f813e3dcfaf483f6db2290] Update darktable.pot for translators.
git bisect good 74e939cf24e9fb5767f813e3dcfaf483f6db2290
# good: [36a971f94f1df7670bce87446ba01c7c376a79d4] Update de.po
git bisect good 36a971f94f1df7670bce87446ba01c7c376a79d4
# good: [d48e51476a5c82e26f954406721e278af9aa7979] Update French translation.
git bisect good d48e51476a5c82e26f954406721e278af9aa7979
# first bad commit: [c3a2a9e4a9023af30d63ea36c7f09be6b90796f8] culling : fix oddities when enter/quit dynamic mode
```
**Platform**
* darktable version : 3.5.0 whatever Git c3a2a9e4a
* OS : Linux kernel 5.11.20-200.fc33.x86_64
* Linux - Distro : Fedora 33
* Memory : 32 GB
* Graphics card : nvidia GeForce GTX 1060-6GB, opencl enabled
* Graphics driver : nvidia proprietary 465.27
* OpenCL installed : yes
* OpenCL activated : yes
* Xorg :
* Desktop : GNOME
* GTK+ :
* gcc :
* cflags :
* CMAKE_BUILD_TYPE :
 | 1.0 | priority | 1
701,056 | 24,084,162,657 | IssuesEvent | 2022-09-19 09:24:19 | AY2223S1-CS2103T-W15-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-W15-1/tp | opened | As a student, I can add a tag to a new contact | type.Story priority.High | ... so that I can categorise my contacts.
For example:
tag add n/John Doe t/friend adds the friend tag to John Doe. | 1.0 | As a student, I can add a tag to a new contact - ... so that I can categorise my contacts.
For example:
tag add n/John Doe t/friend adds the friend tag to John Doe. | priority | as a student i can add a tag to a new contact so that i can categorise my contacts for example tag add n john doe t friend adds the friend tag to john doe | 1 |
273,301 | 8,529,020,708 | IssuesEvent | 2018-11-03 06:46:09 | CS2103-AY1819S1-W13-3/main | https://api.github.com/repos/CS2103-AY1819S1-W13-3/main | closed | Modify Person to better represent students | priority.High status.Ongoing type.Task | Currently the Person is a generic representation of usual person information. Ideally it should reflect students and student fields.
Preliminary changes should include:
Person -> Student
Address -> Faculty
Phone number -> student number | 1.0 | priority | 1
651,431 | 21,478,217,352 | IssuesEvent | 2022-04-26 15:17:56 | knative/docs | https://api.github.com/repos/knative/docs | closed | Knative 1.4 release notes blog post | kind/good-first-issue priority/high | ## Tasks
After Knative 1.4 release day (April 19, 2022):
1. After each component has released, consolidate all release notes into a blog post similar to the [Knative 1.2 blog post](https://knative.dev/blog/releases/announcing-knative-v1-2-release/). Use the [template](https://github.com/knative/docs/blob/main/blog/docs/releases/_template-announcing-knative-v1-X-release.txt) to format the blog post correctly.
2. Add the new blog page to `blog/config/nav.yml`
3. Update the "featured posts" section of `blog/docs/index.md` with the new release.
## Resources
Add the following release notes to the post:
- [ ] https://github.com/knative/serving/releases/tag/knative-v1.4.0
- [ ] https://github.com/knative/eventing/releases/tag/knative-v1.4.0
- [ ] https://github.com/knative-sandbox/eventing-kafka-broker/releases/tag/knative-v1.4.0
- [ ] https://github.com/knative-sandbox/eventing-rabbitmq/releases/tag/knative-v1.4.0
- [ ] https://github.com/knative/client/blob/main/CHANGELOG.adoc#v140-2022-04-19
- [ ] https://github.com/knative/operator/releases/tag/knative-v1.4.0 | 1.0 | priority | 1
165,408 | 6,276,193,739 | IssuesEvent | 2017-07-18 08:59:49 | vanilla-framework/vanilla-docs-theme | https://api.github.com/repos/vanilla-framework/vanilla-docs-theme | closed | Development instructions don't work | Priority: High Type: Bug | ## Process
1. Clone repo
2. Run `./run watch`
## Expected result
The project is rebuilt on file change.
## Current result
```
[15:13:24] Starting 'lint:sass'...
[15:13:24] Starting 'sass:build'...
events.js:163
throw er; // Unhandled 'error' event
^
Error: scss/_theme.scss
Error: File to import not found or unreadable: vanilla-framework/scss/build.
Parent style sheet: /Users/luke/Sites/Canonical/vanilla-docs-theme/scss/_theme.scss
on line 2 of scss/_theme.scss
>> @import 'vanilla-framework/scss/build';
^
at options.error (/Users/luke/Sites/Canonical/vanilla-docs-theme/node_modules/node-sass/lib/index.js:291:26)
error Command failed with exit code 1.
```
Changing the path to `../node_modules/vanilla-framework/build`
Results in the following:
```
error Command "watch" not found.
```
 | 1.0 | priority | 1
575,435 | 17,031,213,126 | IssuesEvent | 2021-07-04 15:51:07 | jason-nguessan/Buddy | https://api.github.com/repos/jason-nguessan/Buddy | closed | Double text on chat | Priority: High bug | Reproduce: Waiting -> Search -> Chat -> text
Device: IOS/Android
Priority: High | 1.0 | priority | 1
719,945 | 24,774,300,563 | IssuesEvent | 2022-10-23 14:39:27 | AY2223S1-CS2103T-T08-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T08-4/tp | closed | Unmark Tutorial | type.Story priority.High | As a CS2103T TA, I can unmark tutorials so that if I make a mistake in marking tutorials as complete, I can unmark. | 1.0 | priority | 1
750,670 | 26,211,783,099 | IssuesEvent | 2023-01-04 07:20:54 | younginnovations/iatipublisher | https://api.github.com/repos/younginnovations/iatipublisher | closed | Validation Issue During New Registration. | type: bug priority: high | 

- [ ] During New Registration there is new validations which has to be looked over.
- [ ] And also even if all the steps are not completed the new registration is already successful. | 1.0 | Validation Issue During New Registration. - 

- [ ] During New Registration there is new validations which has to be looked over.
- [ ] And also even if all the steps are not completed the new registration is already successful. | priority | validation issue during new registration during new registration there is new validations which has to be looked over and also even if all the steps are not completed the new registration is already successful | 1 |
469,477 | 13,509,700,181 | IssuesEvent | 2020-09-14 09:35:20 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Unable to proceed with "Invite user to set password" option and receiving an error | Priority/Highest Severity/Blocker bug | Affected version: wso2is-5.11.0-m36-SNAPSHOT
**Describe the issue:**
Try creating a new user and selecting the password setting option "Invite User to Set Password", I was able to create the user with this option, but proceed after entering the password after receiving an email confirmation. There is an error and cannot proceed.
**How to reproduce:**
1. Login to Console > Manage
2. Add New User
3. Select the Password option as "Invite User to Set Password"
4. Create a user.
5. Confirm the email conformation and provide passwords and Click on Proceed.
please refer screenshots and error log
```ERROR {org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/accountrecoveryendpoint].[completepasswordreset.do]} - Servlet.service() for servlet [completepasswordreset.do] in context with path [/accountrecoveryendpoint] threw exception [An exception occurred processing [/password-reset-complete.jsp] at line [55]
52: String callback = request.getParameter("callback");
53: String sessionDataKey = request.getParameter("sessionDataKey");
54: String username = request.getParameter("username");
55: boolean isAutoLoginEnable = Boolean.parseBoolean(Utils.getConnectorConfig("Recovery.AutoLogin.Enable",
56: tenantDomain));
57:
58: if (StringUtils.isBlank(callback)) {
Stacktrace:] with root cause java.lang.ArrayIndexOutOfBoundsException: Index 0 out of bounds for length 0
at org.wso2.carbon.identity.recovery.util.Utils.getConnectorConfig(Utils.java:533)
at org.apache.jsp.password_002dreset_002dcomplete_jsp._jspService(password_002dreset_002dcomplete_jsp.java:241)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:477)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:385)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:329)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:109)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.wso2.carbon.ui.filters.cache.ContentTypeBasedCachePreventionFilter.doFilter(ContentTypeBasedCachePreventionFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:105)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:101)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
```
1. Provided Password

2, Got error message

**Expected behavior:**
Should able to proceed with user creation without errors as expected
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: wso2is-5.11.0-m36-SNAPSHOT
- OS: Mac
- Database: H2
- Userstore: LDAP
- Jdk : 11.0.5
| 1.0 | Unable to proceed with "Invite user to set password" option and receiving an error - Affected version: wso2is-5.11.0-m36-SNAPSHOT
**Describe the issue:**
Try creating a new user and selecting the password setting option "Invite User to Set Password", I was able to create the user with this option, but proceed after entering the password after receiving an email confirmation. There is an error and cannot proceed.
**How to reproduce:**
1. Login to Console > Manage
2. Add New User
3. Select the Password option as "Invite User to Set Password"
4. Create a user.
5. Confirm the email conformation and provide passwords and Click on Proceed.
please refer screenshots and error log
```ERROR {org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/accountrecoveryendpoint].[completepasswordreset.do]} - Servlet.service() for servlet [completepasswordreset.do] in context with path [/accountrecoveryendpoint] threw exception [An exception occurred processing [/password-reset-complete.jsp] at line [55]
52: String callback = request.getParameter("callback");
53: String sessionDataKey = request.getParameter("sessionDataKey");
54: String username = request.getParameter("username");
55: boolean isAutoLoginEnable = Boolean.parseBoolean(Utils.getConnectorConfig("Recovery.AutoLogin.Enable",
56: tenantDomain));
57:
58: if (StringUtils.isBlank(callback)) {
Stacktrace:] with root cause java.lang.ArrayIndexOutOfBoundsException: Index 0 out of bounds for length 0
at org.wso2.carbon.identity.recovery.util.Utils.getConnectorConfig(Utils.java:533)
at org.apache.jsp.password_002dreset_002dcomplete_jsp._jspService(password_002dreset_002dcomplete_jsp.java:241)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:477)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:385)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:329)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:109)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.wso2.carbon.ui.filters.cache.ContentTypeBasedCachePreventionFilter.doFilter(ContentTypeBasedCachePreventionFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:105)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:101)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
```
1. Provided Password

2, Got error message

**Expected behavior:**
Should able to proceed with user creation without errors as expected
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: wso2is-5.11.0-m36-SNAPSHOT
- OS: Mac
- Database: H2
- Userstore: LDAP
- Jdk : 11.0.5
| priority | unable to proceed with invite user to set password option and receiving an error affected version snapshot describe the issue try creating a new user and selecting the password setting option invite user to set password i was able to create the user with this option but proceed after entering the password after receiving an email confirmation there is an error and cannot proceed how to reproduce login to console manage add new user select the password option as invite user to set password create a user confirm the email conformation and provide passwords and click on proceed please refer screenshots and error log error org apache catalina core containerbase servlet service for servlet in context with path threw exception at line string callback request getparameter callback string sessiondatakey request getparameter sessiondatakey string username request getparameter username boolean isautologinenable boolean parseboolean utils getconnectorconfig recovery autologin enable tenantdomain if stringutils isblank callback stacktrace with root cause java lang arrayindexoutofboundsexception index out of bounds for length at org carbon identity recovery util utils getconnectorconfig utils java at org apache jsp password jsp jspservice password jsp java at org apache jasper runtime httpjspbase service httpjspbase java at javax servlet http httpservlet service httpservlet java at org apache jasper servlet jspservletwrapper service jspservletwrapper java at org apache jasper servlet jspservlet servicejspfile jspservlet java at org apache jasper servlet jspservlet service jspservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters setcharacterencodingfilter dofilter setcharacterencodingfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org carbon ui filters cache contenttypebasedcachepreventionfilter dofilter contenttypebasedcachepreventionfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity cors valve corsvalve invoke corsvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestencodingvalve invoke requestencodingvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java base java lang thread run thread java provided password got error message expected behavior should able to proceed with user creation without errors as expected environment information please complete the following information remove any unnecessary fields product version snapshot os mac database userstore ldap jdk | 1
417,124 | 12,155,913,234 | IssuesEvent | 2020-04-25 15:09:56 | deepsourcelabs/brand-assets | https://api.github.com/repos/deepsourcelabs/brand-assets | closed | Update brand images with padding | Priority: High | Add a white padding around the following images:
- Logo white
- Logo white workmark
- Logo regular
- Logo regular wordmark | 1.0 | Update brand images with padding - Add a white padding around the following images:
- Logo white
- Logo white workmark
- Logo regular
- Logo regular wordmark | priority | update brand images with padding add a white padding around the following images logo white logo white workmark logo regular logo regular wordmark | 1 |
37,888 | 2,831,904,185 | IssuesEvent | 2015-05-25 00:58:53 | david415/HoneyBadger | https://api.github.com/repos/david415/HoneyBadger | opened | add PF_RING support | enhancement highest priority performance improvement |
The plan is to have several Data AcQuisition methods for HoneyBadger... PF_RING included among them. | 1.0 | add PF_RING support -
The plan is to have several Data AcQuisition methods for HoneyBadger... PF_RING included among them. | priority | add pf ring support the plan is to have several data acquisition methods for honeybadger pf ring included among them | 1 |
815,045 | 30,534,080,075 | IssuesEvent | 2023-07-19 16:02:20 | awslabs/aws-dataall | https://api.github.com/repos/awslabs/aws-dataall | closed | ECS containers should be limited to read-only access to root filesystems | type: enhancement status: in-review priority: high | ### Describe the bug
The ECS containers deployed by data.all in the Deployment account have both read- and write access to the mounted root filesystem. This generates security alerts in AWS Security Hub and AWS Trusted Advisor.
This involves all 7 Task Definitions deployed by data.all in the Deployment account:
- dataall-dev-catalog-indexer
- dataall-dev-cdkproxy
- dataall-dev-policies-updater
- dataall-dev-share-manager
- dataall-dev-stacks-updater
- dataall-dev-subscriptions
- dataall-dev-tables-syncer
Remediation guidance for this issue has been described here: https://docs.aws.amazon.com/securityhub/latest/userguide/ecs-controls.html#ecs-5

### How to Reproduce
A default deployment of data.all will result in these alerts.
### Expected behavior
Apply read-only access to the root file system on all 7 ECS Task Definitions.
The ReadonlyRootFilesystem parameter in the container definition of the ECS Task Definitions should be set to true.
### Your project
_No response_
### Screenshots
_No response_
### OS
N/A
### Python version
N/A
### AWS data.all version
v1.4.3
### Additional context
_No response_ | 1.0 | ECS containers should be limited to read-only access to root filesystems - ### Describe the bug
The ECS containers deployed by data.all in the Deployment account have both read- and write access to the mounted root filesystem. This generates security alerts in AWS Security Hub and AWS Trusted Advisor.
This involves all 7 Task Definitions deployed by data.all in the Deployment account:
- dataall-dev-catalog-indexer
- dataall-dev-cdkproxy
- dataall-dev-policies-updater
- dataall-dev-share-manager
- dataall-dev-stacks-updater
- dataall-dev-subscriptions
- dataall-dev-tables-syncer
Remediation guidance for this issue has been described here: https://docs.aws.amazon.com/securityhub/latest/userguide/ecs-controls.html#ecs-5

### How to Reproduce
A default deployment of data.all will result in these alerts.
### Expected behavior
Apply read-only access to the root file system on all 7 ECS Task Definitions.
The ReadonlyRootFilesystem parameter in the container definition of the ECS Task Definitions should be set to true.
### Your project
_No response_
### Screenshots
_No response_
### OS
N/A
### Python version
N/A
### AWS data.all version
v1.4.3
### Additional context
_No response_ | priority | ecs containers should be limited to read only access to root filesystems describe the bug the ecs containers deployed by data all in the deployment account have both read and write access to the mounted root filesystem this generates security alerts in aws security hub and aws trusted advisor this involves all task definitions deployed by data all in the deployment account dataall dev catalog indexer dataall dev cdkproxy dataall dev policies updater dataall dev share manager dataall dev stacks updater dataall dev subscriptions dataall dev tables syncer remediation guidance for this issue has been described here how to reproduce a default deployment of data all will result in these alerts expected behavior apply read only access to the root file system on all ecs task definitions the readonlyrootfilesystem parameter in the container definition of the ecs task definitions should be set to true your project no response screenshots no response os n a python version n a aws data all version additional context no response | 1 |
656,814 | 21,776,848,899 | IssuesEvent | 2022-05-13 14:34:26 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | closed | Create tests on the lenasys website | High priority Group-2-2022 | There are many functionality's which need to be tested on lenasys, but during testing we noticed that the already exisitng "dummy" examples are far from ideal, as they miss alot of things we wanted to test. We could create these manually for every time we want to test something, but it would be highly beneficial to have them already in lenasys to make it faster to test.
For example, in the issue #11300 we could not find an example anywhere where the error occurred, as there was no example of it.
We need to create some examples to cover more areas of testing. Every Kind and Wordlist (filetype) would ideally be added. Try to structure it in a way to make it easy to see what code dugga contains what example. | 1.0 | Create tests on the lenasys website - There are many functionality's which need to be tested on lenasys, but during testing we noticed that the already exisitng "dummy" examples are far from ideal, as they miss alot of things we wanted to test. We could create these manually for every time we want to test something, but it would be highly beneficial to have them already in lenasys to make it faster to test.
For example, in the issue #11300 we could not find an example anywhere where the error occurred, as there was no example of it.
We need to create some examples to cover more areas of testing. Every Kind and Wordlist (filetype) would ideally be added. Try to structure it in a way to make it easy to see what code dugga contains what example. | priority | create tests on the lenasys website there are many functionality s which need to be tested on lenasys but during testing we noticed that the already exisitng dummy examples are far from ideal as they miss alot of things we wanted to test we could create these manually for every time we want to test something but it would be highly beneficial to have them already in lenasys to make it faster to test for example in the issue we could not find an example anywhere where the error occurred as there was no example of it we need to create some examples to cover more areas of testing every kind and wordlist filetype would ideally be added try to structure it in a way to make it easy to see what code dugga contains what example | 1 |
178,420 | 6,608,599,681 | IssuesEvent | 2017-09-19 11:40:38 | SacredDuckwhale/TotalAP | https://api.github.com/repos/SacredDuckwhale/TotalAP | closed | Division by zero sometimes occurs in GUI Update functions | module:core module:gui priority:high status:in-progress type:bug | Initially reported [here](https://mods.curse.com/addons/wow/totalap?comment=158) - could not reproduce. Despite the error message, this likely is **not** related to the addon, "Professor" at all. Slightly different error but apparently the same issue was reported shortly afterwards [here](https://mods.curse.com/addons/wow/totalap?comment=163).
- [x] Figure out steps to reproduce (error is caused by / 0 - but why does that happen? -> artifactTier taking on invalid values causes this)
- [x] Add workaround (prevent division by zero - simple enough) -> Set tier = 2 if the returned value is too high
- [ ] Fix whatever is causing the API to return absurd numbers or simply ignore it if is intended behaviour (Netherlight Crucible?)
Maybe something changed in 7.3? Haven't seen the error before and I didn't play (enough) to see for myself. Must... investigate... | 1.0 | Division by zero sometimes occurs in GUI Update functions - Initially reported [here](https://mods.curse.com/addons/wow/totalap?comment=158) - could not reproduce. Despite the error message, this likely is **not** related to the addon, "Professor" at all. Slightly different error but apparently the same issue was reported shortly afterwards [here](https://mods.curse.com/addons/wow/totalap?comment=163).
- [x] Figure out steps to reproduce (error is caused by / 0 - but why does that happen? -> artifactTier taking on invalid values causes this)
- [x] Add workaround (prevent division by zero - simple enough) -> Set tier = 2 if the returned value is too high
- [ ] Fix whatever is causing the API to return absurd numbers or simply ignore it if is intended behaviour (Netherlight Crucible?)
Maybe something changed in 7.3? Haven't seen the error before and I didn't play (enough) to see for myself. Must... investigate... | priority | division by zero sometimes occurs in gui update functions initially reported could not reproduce despite the error message this likely is not related to the addon professor at all slightly different error but apparently the same issue was reported shortly afterwards figure out steps to reproduce error is caused by but why does that happen artifacttier taking on invalid values causes this add workaround prevent division by zero simple enough set tier if the returned value is too high fix whatever is causing the api to return absurd numbers or simply ignore it if is intended behaviour netherlight crucible maybe something changed in haven t seen the error before and i didn t play enough to see for myself must investigate | 1 |
221,000 | 7,372,816,389 | IssuesEvent | 2018-03-13 15:39:32 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Muon Analysis crash with data search directory | Component: Muon Priority: High | If you add the path to your data in the data search directory and then load some of the data from that location into Muon analysis then it will throw an exception when you change to the data analysis tab.
This is because the name has been mangled:
instrument + path
the solution is to remove the instrument if a path is given. | 1.0 | Muon Analysis crash with data search directory - If you add the path to your data in the data search directory and then load some of the data from that location into Muon analysis then it will throw an exception when you change to the data analysis tab.
This is because the name has been mangled:
instrument + path
the solution is to remove the instrument if a path is given. | priority | muon analysis crash with data search directory if you add the path to your data in the data search directory and then load some of the data from that location into muon analysis then it will throw an exception when you change to the data analysis tab this is because the name has been mangled instrument path the solution is to remove the instrument if a path is given | 1 |
484,046 | 13,933,245,325 | IssuesEvent | 2020-10-22 08:26:37 | AY2021S1-CS2103T-T10-3/tp | https://api.github.com/repos/AY2021S1-CS2103T-T10-3/tp | closed | Implement undo/redo | priority.High type.Story | Undo/redo has been implemented for AB3, but needs to be modified to work with our project. Several things to note:
- All commands need to unambigously specify how the model changes (cannot be delegated to other classes), or else the `HistoryManager` will be unable to know what to reverse when undoing.
- Specifically, undoing the completion of recipes is likely to be problematic, since the current idea is to remove the oldest ingredients in the ingredient book when completing a recipe, which doesn't work well with above. | 1.0 | Implement undo/redo - Undo/redo has been implemented for AB3, but needs to be modified to work with our project. Several things to note:
- All commands need to unambigously specify how the model changes (cannot be delegated to other classes), or else the `HistoryManager` will be unable to know what to reverse when undoing.
- Specifically, undoing the completion of recipes is likely to be problematic, since the current idea is to remove the oldest ingredients in the ingredient book when completing a recipe, which doesn't work well with above. | priority | implement undo redo undo redo has been implemented for but needs to be modified to work with our project several things to note all commands need to unambigously specify how the model changes cannot be delegated to other classes or else the historymanager will be unable to know what to reverse when undoing specifically undoing the completion of recipes is likely to be problematic since the current idea is to remove the oldest ingredients in the ingredient book when completing a recipe which doesn t work well with above | 1 |
427,772 | 12,398,994,408 | IssuesEvent | 2020-05-21 03:40:36 | track-basket/trackbasket_BE | https://api.github.com/repos/track-basket/trackbasket_BE | closed | Getting keys for Kroger API | high priority | ## Issue
- [ ] Style (non-breaking style code)
- [ ] Testing (add testing to code)
- [ ] Bug fix (change which fixes an issue)
- [ ] Refactor (improvement to the code)
- [ ] New feature (change which adds functionality)
- [x] Chore
## Issue Summary
:fire: If it is a new feature provide a user story for the feature, describe the feature and the iteration of the project <br />
files impacted: <br />
:beetle: If it is a bug or issue fix, describe the issue that needs fixing.
## Resources
List any resources used when working on the issue (e.g. links to articles, stackoverflow, etc.)
| 1.0 | Getting keys for Kroger API - ## Issue
- [ ] Style (non-breaking style code)
- [ ] Testing (add testing to code)
- [ ] Bug fix (change which fixes an issue)
- [ ] Refactor (improvement to the code)
- [ ] New feature (change which adds functionality)
- [x] Chore
## Issue Summary
:fire: If it is a new feature provide a user story for the feature, describe the feature and the iteration of the project <br />
files impacted: <br />
:beetle: If it is a bug or issue fix, describe the issue that needs fixing.
## Resources
List any resources used when working on the issue (e.g. links to articles, stackoverflow, etc.)
| priority | getting keys for kroger api issue style non breaking style code testing add testing to code bug fix change which fixes an issue refactor improvement to the code new feature change which adds functionality chore issue summary fire if it is a new feature provide a user story for the feature describe the feature and the iteration of the project files impacted beetle if it is a bug or issue fix describe the issue that needs fixing resources list any resources used when working on the issue e g links to articles stackoverflow etc | 1 |
534,190 | 15,611,782,172 | IssuesEvent | 2021-03-19 14:41:11 | sopra-fs21-group-01/client | https://api.github.com/repos/sopra-fs21-group-01/client | opened | #2 Delete closed Lobby | high priority task | When a Lobby is closed, delete the object and ID
Time: 1.5h
Part of #2 | 1.0 | #2 Delete closed Lobby - When a Lobby is closed, delete the object and ID
Time: 1.5h
Part of #2 | priority | delete closed lobby when a lobby is closed delete the object and id time part of | 1 |
430,920 | 12,468,029,053 | IssuesEvent | 2020-05-28 18:06:47 | ChainSafe/gossamer | https://api.github.com/repos/ChainSafe/gossamer | opened | grandpa-ghost: if no blocks have >=2/3 votes, return block with most votes | Priority: 2 - High grandpa | <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- getPreVotedBlock should return the block with the most votes, if no block has >=2/3 votes
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
- getPreVotedBlock (grandpa-ghost) returns nil if no block has >=2/3 voters
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [ ] I have read [CONTRIBUTING](CONTRIBUTING.md) and [CODE_OF_CONDUCT](CODE_OF_CONDUCT.md)
- [ ] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
<!--- Modified from trufflesuite/ganache -->
| 1.0 | grandpa-ghost: if no blocks have >=2/3 votes, return block with most votes - <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- getPreVotedBlock should return the block with the most votes, if no block has >=2/3 votes
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
- getPreVotedBlock (grandpa-ghost) returns nil if no block has >=2/3 voters
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [ ] I have read [CONTRIBUTING](CONTRIBUTING.md) and [CODE_OF_CONDUCT](CODE_OF_CONDUCT.md)
- [ ] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
<!--- Modified from trufflesuite/ganache -->
| priority | grandpa ghost if no blocks have votes return block with most votes please read carefully expected behavior if you re describing a bug tell us what should happen if you re suggesting a change improvement tell us how it should work getprevotedblock should return the block with the most votes if no block has votes current behavior if describing a bug tell us what happens instead of the expected behavior if suggesting a change or an improvement explain the difference between your suggestion and current behavior getprevotedblock grandpa ghost returns nil if no block has voters checklist each empty square brackets below is a checkbox replace with to check the box after completing the task i have read contributing md and code of conduct md i have provided as much information as possible and necessary i am planning to submit a pull request to fix this issue myself | 1 |
197,424 | 6,955,052,237 | IssuesEvent | 2017-12-07 05:23:30 | wso2/cloudformation-is | https://api.github.com/repos/wso2/cloudformation-is | closed | Git repository URL in the README.md file is wrong | Priority/High Type/Improvement | **Description:**
It seems like the root README.md file is pointing to the @malithie's personal Git repository:
https://github.com/wso2/cloudformation-is/blob/master/README.md
**Affected Product Version:**
5.3.0, 5.4.0 | 1.0 | Git repository URL in the README.md file is wrong - **Description:**
It seems like the root README.md file is pointing to the @malithie's personal Git repository:
https://github.com/wso2/cloudformation-is/blob/master/README.md
**Affected Product Version:**
5.3.0, 5.4.0 | priority | git repository url in the readme md file is wrong description it seems like the root readme md file is pointing to the malithie s personal git repository affected product version | 1 |
708,158 | 24,332,478,789 | IssuesEvent | 2022-09-30 20:51:29 | Rehachoudhary0/hotel_testing | https://api.github.com/repos/Rehachoudhary0/hotel_testing | closed | 🐛 Bug Report: Hotel Booking page | bug duplicate invalid app front-end (UI/UX) High priority | ### 👟 Reproduction steps
Other hotel shows another hotel booking
### 👍 Expected behavior
Should be shows particular hotel data
### 👎 Actual Behavior
Uploading WhatsApp Video 2022-09-29 at 7.14.30 PM.mp4…
### 🎲 App version
Version 22.09.29+01
### 💻 Operating system
Android
### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Code of Conduct?
- [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md) | 1.0 | 🐛 Bug Report: Hotel Booking page - ### 👟 Reproduction steps
Other hotel shows another hotel booking
### 👍 Expected behavior
Should be shows particular hotel data
### 👎 Actual Behavior
Uploading WhatsApp Video 2022-09-29 at 7.14.30 PM.mp4…
### 🎲 App version
Version 22.09.29+01
### 💻 Operating system
Android
### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Code of Conduct?
- [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md) | priority | 🐛 bug report hotel booking page 👟 reproduction steps other hotel shows another hotel booking 👍 expected behavior should be shows particular hotel data 👎 actual behavior uploading whatsapp video at pm … 🎲 app version version 💻 operating system android 👀 have you spent some time to check if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the code of conduct i have read the | 1 |
739,673 | 25,710,359,113 | IssuesEvent | 2022-12-07 05:56:41 | sorrowcode/taesch- | https://api.github.com/repos/sorrowcode/taesch- | opened | refactoring - Auslagern in einzelne Repositories/Klassen | USP - 8 1 - high priority | Durch immer mehr kommende APIs wird das jetztige Repository immer überfüllter und unübersichtlicher. Zusätzlich dazu verstößt das gegen Clean Code SOLID. Verstöße hierbei sind:
- (S)ingle Responsibility Principle
- die Klasse hat mehr als nur eine Funktionalität als auch Aufgabe
- (O)pen Closed Principle
- die Klasse ist nicht mehr offen für neue Funktionen aber geschlossen zu Modifikationen
- (D)ependency Inversion Principle
- die APIs der jeweiligen Repositories werden direkt angesprochen ohne Abstraktion
-> Definition von geeigneten Interfaces notwendig
Betroffen ist hier:
- APIQuerier, bald QuerierAPI
- zukünftig kommt die Firebase-Anbindung ebenfalls dazu
Hier soll eine passende architektuale Lösung recherchiert und implementiert werden. | 1.0 | refactoring - Auslagern in einzelne Repositories/Klassen - Durch immer mehr kommende APIs wird das jetztige Repository immer überfüllter und unübersichtlicher. Zusätzlich dazu verstößt das gegen Clean Code SOLID. Verstöße hierbei sind:
- (S)ingle Responsibility Principle
- die Klasse hat mehr als nur eine Funktionalität als auch Aufgabe
- (O)pen Closed Principle
- die Klasse ist nicht mehr offen für neue Funktionen aber geschlossen zu Modifikationen
- (D)ependency Inversion Principle
- die APIs der jeweiligen Repositories werden direkt angesprochen ohne Abstraktion
-> Definition von geeigneten Interfaces notwendig
Betroffen ist hier:
- APIQuerier, bald QuerierAPI
- zukünftig kommt die Firebase-Anbindung ebenfalls dazu
Hier soll eine passende architektuale Lösung recherchiert und implementiert werden. | priority | refactoring auslagern in einzelne repositories klassen durch immer mehr kommende apis wird das jetztige repository immer überfüllter und unübersichtlicher zusätzlich dazu verstößt das gegen clean code solid verstöße hierbei sind s ingle responsibility principle die klasse hat mehr als nur eine funktionalität als auch aufgabe o pen closed principle die klasse ist nicht mehr offen für neue funktionen aber geschlossen zu modifikationen d ependency inversion principle die apis der jeweiligen repositories werden direkt angesprochen ohne abstraktion definition von geeigneten interfaces notwendig betroffen ist hier apiquerier bald querierapi zukünftig kommt die firebase anbindung ebenfalls dazu hier soll eine passende architektuale lösung recherchiert und implementiert werden | 1 |
107,109 | 4,289,122,566 | IssuesEvent | 2016-07-17 22:14:10 | rdunlop/unicycling-registration | https://api.github.com/repos/rdunlop/unicycling-registration | closed | Clean up the "Choose Competitors" page | enhancement High Priority | The "Choose Competitors" page should only show the "display_candidates" page for competitions which are "from_lane_assignment" | 1.0 | Clean up the "Choose Competitors" page - The "Choose Competitors" page should only show the "display_candidates" page for competitions which are "from_lane_assignment" | priority | clean up the choose competitors page the choose competitors page should only show the display candidates page for competitions which are from lane assignment | 1 |
96,244 | 3,966,394,054 | IssuesEvent | 2016-05-03 12:50:41 | daronco/test-issue-migrate2 | https://api.github.com/repos/daronco/test-issue-migrate2 | closed | Localization of layout names | Priority: High Status: Resolved Type: Feature | ---
Author Name: **Leonardo Daronco** (@daronco)
Original Redmine Issue: 1571, http://dev.mconf.org/redmine/issues/1571
Original Assignee: Alister Machado
---
In BigBlueButton (and most of the Mconf-Live instances), layout names appear on layouts.xml as keys to a strings in a localization file, such as "bbb.layout.name.defaultlayout". Currently, the gem is getting these names and showing them to the user, as if they were the name of the layout.
The gem should do as BigBlueButton does: get the name of the layouts from layouts.xml and search for this key in the localization files to translate them.
| 1.0 | Localization of layout names - ---
Author Name: **Leonardo Daronco** (@daronco)
Original Redmine Issue: 1571, http://dev.mconf.org/redmine/issues/1571
Original Assignee: Alister Machado
---
In BigBlueButton (and most of the Mconf-Live instances), layout names appear on layouts.xml as keys to a strings in a localization file, such as "bbb.layout.name.defaultlayout". Currently, the gem is getting these names and showing them to the user, as if they were the name of the layout.
The gem should do as BigBlueButton does: get the name of the layouts from layouts.xml and search for this key in the localization files to translate them.
| priority | localization of layout names author name leonardo daronco daronco original redmine issue original assignee alister machado in bigbluebutton and most of the mconf live instances layout names appear on layouts xml as keys to a strings in a localization file such as bbb layout name defaultlayout currently the gem is getting these names and showing them to the user as if they were the name of the layout the gem should do as bigbluebutton does get the name of the layouts from layouts xml and search for this key in the localization files to translate them | 1 |
692,555 | 23,739,919,214 | IssuesEvent | 2022-08-31 11:31:05 | qoretechnologies/qorus-vscode | https://api.github.com/repos/qoretechnologies/qorus-vscode | closed | [BUG] cannot set mapper options | bug high-priority | trying to set a mapper option , for example `output_provider_upsert` (none of the options I tried to set worked), does not work - the option is not added to the form, and then the user cannot remove the options field or save the mapper.
Note that also clicking on "Discard unsaved changes" does not result in the invalid / empty options entry getting removed in this case. | 1.0 | [BUG] cannot set mapper options - trying to set a mapper option , for example `output_provider_upsert` (none of the options I tried to set worked), does not work - the option is not added to the form, and then the user cannot remove the options field or save the mapper.
Note that also clicking on "Discard unsaved changes" does not result in the invalid / empty options entry getting removed in this case. | priority | cannot set mapper options trying to set a mapper option for example output provider upsert none of the options i tried to set worked does not work the option is not added to the form and then the user cannot remove the options field or save the mapper note that also clicking on discard unsaved changes does not result in the invalid empty options entry getting removed in this case | 1 |
813,757 | 30,470,692,460 | IssuesEvent | 2023-07-17 13:28:07 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-2173] /embed: weekly view - header is broken and I can't move to next week | 🐛 bug High priority embed | The position of the fixed day header is broken in embeds and the top header showing the date and allowing me to move forward in the month is missing.
[https://github.com/calcom/cal.com/assets/4536123/fd7d5bca-e73e-453d-93c7-8388b20ccc94](https://github.com/calcom/cal.com/assets/4536123/fd7d5bca-e73e-453d-93c7-8388b20ccc94)
Steps to reproduce:
* Embed an event type in a website using the "floating pop-up button" option
* Click the "book a call" button when on the website.
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-2173](https://linear.app/calcom/issue/CAL-2173/embed-weekly-view-header-is-broken-and-i-cant-move-to-next-week)</sub> | 1.0 | [CAL-2173] /embed: weekly view - header is broken and I can't move to next week - The position of the fixed day header is broken in embeds and the top header showing the date and allowing me to move forward in the month is missing.
[https://github.com/calcom/cal.com/assets/4536123/fd7d5bca-e73e-453d-93c7-8388b20ccc94](https://github.com/calcom/cal.com/assets/4536123/fd7d5bca-e73e-453d-93c7-8388b20ccc94)
Steps to reproduce:
* Embed an event type in a website using the "floating pop-up button" option
* Click the "book a call" button when on the website.
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-2173](https://linear.app/calcom/issue/CAL-2173/embed-weekly-view-header-is-broken-and-i-cant-move-to-next-week)</sub> | priority | embed weekly view header is broken and i can t move to next week the position of the fixed day header is broken in embeds and the top header showing the date and allowing me to move forward in the month is missing steps to reproduce embed an event type in a website using the floating pop up button option click the book a call button when on the website from | 1 |
484,108 | 13,934,473,882 | IssuesEvent | 2020-10-22 10:04:49 | adorsys/open-banking-gateway | https://api.github.com/repos/adorsys/open-banking-gateway | opened | Create Bank and Bank action API | BE high priority | Create API and endpoints that allow to add Bank and specify actions available for that bank. The endpoint should be protected with BasicAuth. Also, add endpoint that allows to delete bank and bank action | 1.0 | Create Bank and Bank action API - Create API and endpoints that allow to add Bank and specify actions available for that bank. The endpoint should be protected with BasicAuth. Also, add endpoint that allows to delete bank and bank action | priority | create bank and bank action api create api and endpoints that allow to add bank and specify actions available for that bank the endpoint should be protected with basicauth also add endpoint that allows to delete bank and bank action | 1 |
558,891 | 16,544,148,629 | IssuesEvent | 2021-05-27 21:03:44 | BricksVR/bricksvr-issue-tracking | https://api.github.com/repos/BricksVR/bricksvr-issue-tracking | closed | Menus take priority over bricks | feature request high priority | So when you open a menu, in the current build, if there are blocks in between you and the menu the blocks get in the way of the menu. I feel like it would be really useful if the menu took priority over that to make it easier to build in small spaces or use settings etc. | 1.0 | Menus take priority over bricks - So when you open a menu, in the current build, if there are blocks in between you and the menu the blocks get in the way of the menu. I feel like it would be really useful if the menu took priority over that to make it easier to build in small spaces or use settings etc. | priority | menus take priority over bricks so when you open a menu in the current build if there are blocks in between you and the menu the blocks get in the way of the menu i feel like it would be really useful if the menu took priority over that to make it easier to build in small spaces or use settings etc | 1 |
256,828 | 8,129,486,622 | IssuesEvent | 2018-08-17 15:12:46 | UrbanCCD-UChicago/plenario2 | https://api.github.com/repos/UrbanCCD-UChicago/plenario2 | closed | We Need Better AoT Controller Tests | API Bug: High Priority | I could have sworn I redid these, but oh well. They look kinda anemic compared to the other controllers.
## What happened?
Nothing really. Just some sad tests.
## What did you expect?
More comprehensive tests of the AoT endpoint and its directives.
## Are there any specific error messages?
Nope. | 1.0 | We Need Better AoT Controller Tests - I could have sworn I redid these, but oh well. They look kinda anemic compared to the other controllers.
## What happened?
Nothing really. Just some sad tests.
## What did you expect?
More comprehensive tests of the AoT endpoint and its directives.
## Are there any specific error messages?
Nope. | priority | we need better aot controller tests i could have sworn i redid these but oh well they look kinda anemic compared to the other controllers what happened nothing really just some sad tests what did you expect more comprehensive tests of the aot endpoint and its directives are there any specific error messages nope | 1 |
415,166 | 12,125,669,038 | IssuesEvent | 2020-04-22 15:52:09 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | Collection relation conflation very slow with Differential | Category: Algorithms Priority: High Status: In Progress Type: Maintenance | reported by Patrick; disabling collection relation for the job allowed it completely very quickly | 1.0 | Collection relation conflation very slow with Differential - reported by Patrick; disabling collection relation for the job allowed it completely very quickly | priority | collection relation conflation very slow with differential reported by patrick disabling collection relation for the job allowed it completely very quickly | 1 |
220,608 | 7,368,993,313 | IssuesEvent | 2018-03-13 00:03:55 | RPGHacker/asar | https://api.github.com/repos/RPGHacker/asar | closed | Add a filesize() and a fileexists() function | TODO new feature priority: high | Add a filesize() function which lets us query the size of a file (to use in conjunction with readfile() functions). | 1.0 | Add a filesize() and a fileexists() function - Add a filesize() function which lets us query the size of a file (to use in conjunction with readfile() functions). | priority | add a filesize and a fileexists function add a filesize function which lets us query the size of a file to use in conjunction with readfile functions | 1 |
205,790 | 7,106,001,584 | IssuesEvent | 2018-01-16 15:24:00 | cilium/cilium | https://api.github.com/repos/cilium/cilium | opened | Add validation schema version for CNP CRD | area/k8s kind/enhancement priority/high project/1.0-gap | Add version schema to detect if new schema is more recent than the one stored on the running cluster.
Version can be stored on the CRD labels | 1.0 | Add validation schema version for CNP CRD - Add version schema to detect if new schema is more recent than the one stored on the running cluster.
Version can be stored on the CRD labels | priority | add validation schema version for cnp crd add version schema to detect if new schema is more recent than the one stored on the running cluster version can be stored on the crd labels | 1 |
289,728 | 8,875,674,160 | IssuesEvent | 2019-01-12 06:38:37 | mono/monodevelop | https://api.github.com/repos/mono/monodevelop | closed | Focusing out/into VisualStudio changes the default focused element on the UI | Area: Shell high-priority papercut vs-sync | # Repro steps
Make sure you have a second app running, like a browser that can be tabbed into
Make sure the text editor is open and focused, type "class Foo"
Press Command-Tab to switch focus from the IDE to the browser
Press Command-Tab to switch back to the IDE
Attempt to type: you will notice that the editor is no longer focused, and the input goes somewhere else.
# Scenario
This happens a lot when I am porting code, and I have to switch from VSMac to other tools (Safari, Emacs, Terminal).
I provided an attachment (large) for internal use on an older version of this bug (as I think I did not file it properly):
> VS bug [#754535](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/754535) | 1.0 | Focusing out/into VisualStudio changes the default focused element on the UI - # Repro steps
Make sure you have a second app running, like a browser that can be tabbed into
Make sure the text editor is open and focused, type "class Foo"
Press Command-Tab to switch focus from the IDE to the browser
Press Command-Tab to switch back to the IDE
Attempt to type: you will notice that the editor is no longer focused, and the input goes somewhere else.
# Scenario
This happens a lot when I am porting code, and I have to switch from VSMac to other tools (Safari, Emacs, Terminal).
I provided an attachment (large) for internal use on an older version of this bug (as I think I did not file it properly):
> VS bug [#754535](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/754535) | priority | focusing out into visualstudio changes the default focused element on the ui repro steps make sure you have a second app running like a browser that can be tabbed into make sure the text editor is open and focused type class foo press command tab to switch focus from the ide to the browser press command tab to switch back to the ide attempt to type you will notice that the editor is no longer focused and the input goes somewhere else scenario this happens a lot when i am porting code and i have to switch from vsmac to other tools safari emacs terminal i provided an attachment large for internal use on an older version of this bug as i think i did not file it properly vs bug | 1 |
332,547 | 10,097,676,259 | IssuesEvent | 2019-07-28 08:25:45 | social-dist0rtion-protocol/planet-a | https://api.github.com/repos/social-dist0rtion-protocol/planet-a | closed | Display the global CO2 value and make the CO2 "run-away point" configurable | high priority | ## Scope
<img width="966" alt="Screen Shot 2019-07-16 at 14 08 47" src="https://user-images.githubusercontent.com/2758453/61293115-40496280-a7d3-11e9-864f-541f1f19ffd3.png">
- For every trade that occurs in the game, an amount of CO2 is emitted into the "Air"
- With proceeding pollution of the "Air", living conditions become tougher
- A "run-away" point or "point of no return" is a situation that occurs when there has been so much CO2 emitted, that planing plants/trees becomes less efficient to lessen the CO2 impact
- In this sense, a "run-away" point marks the beginning of a potential death spiral
## Deliverables
<img width="615" alt="Screen Shot 2019-07-16 at 14 13 11" src="https://user-images.githubusercontent.com/2758453/61293391-de3d2d00-a7d3-11e9-9ae4-5da2f59c4ad9.png">
- Read the global CO2 value from all the unspents and display its value in a [dramatic](https://www.mcc-berlin.net/de/forschung/co2-budget.html) way in the UI. Ideally on the main view App.js
- Note that for the purpose of testing the game at the Ethereum Meetup, a complicated time-series data plot/chart is **not** required! It's something we can build for ETHBerlinZwei of course. As a countdown requires similar functionality, we recommend **not** implementing that either (too much work for now).
## Roles
bounty gardener: @TimDaub / 10%
bounty worker: name / 75%
bounty reviewer: name / 15% | 1.0 | Display the global CO2 value and make the CO2 "run-away point" configurable - ## Scope
<img width="966" alt="Screen Shot 2019-07-16 at 14 08 47" src="https://user-images.githubusercontent.com/2758453/61293115-40496280-a7d3-11e9-864f-541f1f19ffd3.png">
- For every trade that occurs in the game, an amount of CO2 is emitted into the "Air"
- With proceeding pollution of the "Air", living conditions become tougher
- A "run-away" point or "point of no return" is a situation that occurs when there has been so much CO2 emitted, that planing plants/trees becomes less efficient to lessen the CO2 impact
- In this sense, a "run-away" point marks the beginning of a potential death spiral
## Deliverables
<img width="615" alt="Screen Shot 2019-07-16 at 14 13 11" src="https://user-images.githubusercontent.com/2758453/61293391-de3d2d00-a7d3-11e9-9ae4-5da2f59c4ad9.png">
- Read the global CO2 value from all the unspents and display its value in a [dramatic](https://www.mcc-berlin.net/de/forschung/co2-budget.html) way in the UI. Ideally on the main view App.js
- Note that for the purpose of testing the game at the Ethereum Meetup, a complicated time-series data plot/chart is **not** required! It's something we can build for ETHBerlinZwei of course. As a countdown requires similar functionality, we recommend **not** implementing that either (too much work for now).
## Roles
bounty gardener: @TimDaub / 10%
bounty worker: name / 75%
bounty reviewer: name / 15% | priority | display the global value and make the run away point configurable scope img width alt screen shot at src for every trade that occurs in the game an amount of is emitted into the air with proceeding pollution of the air living conditions become tougher a run away point or point of no return is a situation that occurs when there has been so much emitted that planing plants trees becomes less efficient to lessen the impact in this sense a run away point marks the beginning of a potential death spiral deliverables img width alt screen shot at src read the global value from all the unspents and display its value in a way in the ui ideally on the main view app js note that for the purpose of testing the game at the ethereum meetup a complicated time series data plot chart is not required it s something we can build for ethberlinzwei of course as a countdown requires similar functionality we recommend not implementing that either too much work for now roles bounty gardener timdaub bounty worker name bounty reviewer name | 1 |
289,921 | 8,880,404,357 | IssuesEvent | 2019-01-14 05:51:26 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | ValueError: filedescriptor out of range in select() | SEV2-high area/analytics area/analytics/ingestion env/prod priority/P2 team/analytics type/bug | From sentry: https://errortracking.prod-preview.openshift.io/openshift_io/fabric8-analytics-production/issues/6249/
```
ValueError: filedescriptor out of range in select()
File "celery/app/trace.py", line 375, in trace_task
R = retval = fun(*args, **kwargs)
File "celery/app/trace.py", line 632, in __protected_call__
return self.run(*args, **kwargs)
File "selinon/task_envelope.py", line 169, in run
raise self.retry(max_retries=0, exc=exc)
File "celery/app/task.py", line 668, in retry
raise_with_context(exc)
File "selinon/task_envelope.py", line 114, in run
result = task.run(node_args)
File "f8a_worker/base.py", line 106, in run
raise exc
File "f8a_worker/base.py", line 81, in run
result = self.execute(node_args)
File "f8a_worker/workers/init_analysis_flow.py", line 70, in execute
epv_cache.put_source_tarball(source_tarball_path)
File "f8a_worker/object_cache.py", line 162, in put_source_tarball
self._put_meta(os.path.basename(source_tarball_path))
File "f8a_worker/object_cache.py", line 107, in _put_meta
self._s3.store_dict(tarball_name, self._meta_json_object_key)
File "f8a_worker/storages/s3.py", line 208, in store_dict
return self.store_blob(blob, object_key)
File "f8a_worker/storages/s3.py", line 191, in store_blob
response = self._s3.Object(self.bucket_name, object_key).put(**put_kwargs)
File "boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "botocore/client.py", line 599, in _make_api_call
operation_model, request_dict)
File "botocore/endpoint.py", line 148, in make_request
return self._send_request(request_dict, operation_model)
File "botocore/endpoint.py", line 177, in _send_request
success_response, exception):
File "botocore/endpoint.py", line 273, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "botocore/retryhandler.py", line 269, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "botocore/endpoint.py", line 222, in _get_response
proxies=self.proxies, timeout=self.timeout)
File "botocore/vendored/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "botocore/vendored/requests/adapters.py", line 370, in send
timeout=timeout
File "botocore/vendored/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "botocore/vendored/requests/packages/urllib3/connectionpool.py", line 349, in _make_request
conn.request(method, url, **httplib_request_kw)
File "http/client.py", line 1137, in request
self._send_request(method, url, body, headers)
File "botocore/awsrequest.py", line 130, in _send_request
self, method, url, body, headers, *args, **kwargs)
File "http/client.py", line 1182, in _send_request
self.endheaders(body)
File "http/client.py", line 1133, in endheaders
self._send_output(message_body)
File "botocore/awsrequest.py", line 163, in _send_output
read, write, exc = select.select([self.sock], [], [self.sock], 1)
``` | 1.0 | ValueError: filedescriptor out of range in select() - From sentry: https://errortracking.prod-preview.openshift.io/openshift_io/fabric8-analytics-production/issues/6249/
```
ValueError: filedescriptor out of range in select()
File "celery/app/trace.py", line 375, in trace_task
R = retval = fun(*args, **kwargs)
File "celery/app/trace.py", line 632, in __protected_call__
return self.run(*args, **kwargs)
File "selinon/task_envelope.py", line 169, in run
raise self.retry(max_retries=0, exc=exc)
File "celery/app/task.py", line 668, in retry
raise_with_context(exc)
File "selinon/task_envelope.py", line 114, in run
result = task.run(node_args)
File "f8a_worker/base.py", line 106, in run
raise exc
File "f8a_worker/base.py", line 81, in run
result = self.execute(node_args)
File "f8a_worker/workers/init_analysis_flow.py", line 70, in execute
epv_cache.put_source_tarball(source_tarball_path)
File "f8a_worker/object_cache.py", line 162, in put_source_tarball
self._put_meta(os.path.basename(source_tarball_path))
File "f8a_worker/object_cache.py", line 107, in _put_meta
self._s3.store_dict(tarball_name, self._meta_json_object_key)
File "f8a_worker/storages/s3.py", line 208, in store_dict
return self.store_blob(blob, object_key)
File "f8a_worker/storages/s3.py", line 191, in store_blob
response = self._s3.Object(self.bucket_name, object_key).put(**put_kwargs)
File "boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "botocore/client.py", line 599, in _make_api_call
operation_model, request_dict)
File "botocore/endpoint.py", line 148, in make_request
return self._send_request(request_dict, operation_model)
File "botocore/endpoint.py", line 177, in _send_request
success_response, exception):
File "botocore/endpoint.py", line 273, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "botocore/retryhandler.py", line 269, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "botocore/endpoint.py", line 222, in _get_response
proxies=self.proxies, timeout=self.timeout)
File "botocore/vendored/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "botocore/vendored/requests/adapters.py", line 370, in send
timeout=timeout
File "botocore/vendored/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "botocore/vendored/requests/packages/urllib3/connectionpool.py", line 349, in _make_request
conn.request(method, url, **httplib_request_kw)
File "http/client.py", line 1137, in request
self._send_request(method, url, body, headers)
File "botocore/awsrequest.py", line 130, in _send_request
self, method, url, body, headers, *args, **kwargs)
File "http/client.py", line 1182, in _send_request
self.endheaders(body)
File "http/client.py", line 1133, in endheaders
self._send_output(message_body)
File "botocore/awsrequest.py", line 163, in _send_output
read, write, exc = select.select([self.sock], [], [self.sock], 1)
``` | priority | valueerror filedescriptor out of range in select from sentry valueerror filedescriptor out of range in select file celery app trace py line in trace task r retval fun args kwargs file celery app trace py line in protected call return self run args kwargs file selinon task envelope py line in run raise self retry max retries exc exc file celery app task py line in retry raise with context exc file selinon task envelope py line in run result task run node args file worker base py line in run raise exc file worker base py line in run result self execute node args file worker workers init analysis flow py line in execute epv cache put source tarball source tarball path file worker object cache py line in put source tarball self put meta os path basename source tarball path file worker object cache py line in put meta self store dict tarball name self meta json object key file worker storages py line in store dict return self store blob blob object key file worker storages py line in store blob response self object self bucket name object key put put kwargs file resources factory py line in do action response action self args kwargs file resources action py line in call response getattr parent meta client operation name params file botocore client py line in api call return self make api call operation name kwargs file botocore client py line in make api call operation model request dict file botocore endpoint py line in make request return self send request request dict operation model file botocore endpoint py line in send request success response exception file botocore endpoint py line in needs retry caught exception caught exception request dict request dict file botocore hooks py line in emit return self emit event name kwargs file botocore hooks py line in emit response handler kwargs file botocore retryhandler py line in call if self checker attempts response caught exception file botocore retryhandler py line in call caught exception file 
botocore retryhandler py line in should retry return self checker attempt number response caught exception file botocore retryhandler py line in call caught exception file botocore retryhandler py line in call attempt number caught exception file botocore retryhandler py line in check caught exception raise caught exception file botocore endpoint py line in get response proxies self proxies timeout self timeout file botocore vendored requests sessions py line in send r adapter send request kwargs file botocore vendored requests adapters py line in send timeout timeout file botocore vendored requests packages connectionpool py line in urlopen body body headers headers file botocore vendored requests packages connectionpool py line in make request conn request method url httplib request kw file http client py line in request self send request method url body headers file botocore awsrequest py line in send request self method url body headers args kwargs file http client py line in send request self endheaders body file http client py line in endheaders self send output message body file botocore awsrequest py line in send output read write exc select select | 1 |
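The traceback in the record above bottoms out in `select.select`, which wraps POSIX `select(2)` and rejects any file descriptor numerically ≥ `FD_SETSIZE` (typically 1024), no matter how few descriptors are actually open. A minimal sketch of the fd-number-independent alternative — the `wait_readable` helper and the socketpair demo are illustrative, not code from the worker above:

```python
import selectors
import socket

def wait_readable(sock, timeout=1.0):
    """Wait until `sock` is readable using selectors.DefaultSelector,
    which picks epoll/kqueue where available and therefore has no
    FD_SETSIZE ceiling, unlike select.select()."""
    sel = selectors.DefaultSelector()
    try:
        sel.register(sock, selectors.EVENT_READ)
        events = sel.select(timeout)  # empty list on timeout
        return bool(events)
    finally:
        sel.close()

if __name__ == "__main__":
    a, b = socket.socketpair()
    b.sendall(b"ping")           # make `a` readable
    print(wait_readable(a))      # True
    a.close()
    b.close()
```

Libraries that hard-code `select.select` (as the vendored requests/urllib3 stack here does) hit the `ValueError` whenever a process accumulates more than ~1024 open descriptors, which is why the usual remediation is either lowering the open-fd count or moving to a poll/epoll-based I/O path.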
202,154 | 7,044,639,325 | IssuesEvent | 2018-01-01 06:08:18 | magfest/ubersystem | https://api.github.com/repos/magfest/ubersystem | closed | Refresh View Recent At-the-Door Registrations After Mark as Paid | priority:high | Opened per conversation with Rob & Vicki:
At Labs 2017, when on the View Recent At-the-Door Registrations, whenever we clicked "Mark as Paid" we still had to refresh the page to enter in the badge number. Please add in a page refresh when "Mark as Paid" is checked. | 1.0 | Refresh View Recent At-the-Door Registrations After Mark as Paid - Opened per conversation with Rob & Vicki:
At Labs 2017, when on the View Recent At-the-Door Registrations, whenever we clicked "Mark as Paid" we still had to refresh the page to enter in the badge number. Please add in a page refresh when "Mark as Paid" is checked. | priority | refresh view recent at the door registrations after mark as paid opened per conversation with rob vicki at labs when on the view recent at the door registrations whenever we clicked mark as paid we still had to refresh the page to enter in the badge number please add in a page refresh when mark as paid is checked | 1 |
397,841 | 11,733,764,868 | IssuesEvent | 2020-03-11 07:56:46 | mintproject/mint-ui-lit | https://api.github.com/repos/mintproject/mint-ui-lit | closed | Cosmetic issues in Model Catalog Explorer | high priority | - In model setups, add the link to DockerHub.
- Instead of "HELP" call the button "Documentation"
- Change the text: "The MINT model browser allows you to learn about the different models included in MINT. Each model can have separate configurations, each representing a unique set up of that model (particular choices of processes, regions, etc). Each configuration can have separate setups that provide different default values for files and parameters.
In the search bar below you can search models in two ways, which you can choose on the right. One is to search their descriptions using a model name, type (e.g., agriculture), keyword (fertilizer), and areas (e.g. Pongo). Another is to search their variables (e.g., rainfall)."
to be
"The MINT model browser allows you to learn about the different models included in MINT. Each model can have separate configurations, each representing a unique functionality of that model (particular choices of processes, regions, etc). Each configuration can have separate setups that provide different default values for files and parameters.
In the search bar below you can search models in two ways, which you can choose on the right. One is to search their descriptions using a model name, type (e.g., agriculture), keyword (fertilizer), and areas (e.g. Pongo). Another is to search their variables (e.g., rainfall)." | 1.0 | Cosmetic issues in Model Catalog Explorer - - In model setups, add the link to DockerHub.
- Instead of "HELP" call the button "Documentation"
- Change the text: "The MINT model browser allows you to learn about the different models included in MINT. Each model can have separate configurations, each representing a unique set up of that model (particular choices of processes, regions, etc). Each configuration can have separate setups that provide different default values for files and parameters.
In the search bar below you can search models in two ways, which you can choose on the right. One is to search their descriptions using a model name, type (e.g., agriculture), keyword (fertilizer), and areas (e.g. Pongo). Another is to search their variables (e.g., rainfall)."
to be
"The MINT model browser allows you to learn about the different models included in MINT. Each model can have separate configurations, each representing a unique functionality of that model (particular choices of processes, regions, etc). Each configuration can have separate setups that provide different default values for files and parameters.
In the search bar below you can search models in two ways, which you can choose on the right. One is to search their descriptions using a model name, type (e.g., agriculture), keyword (fertilizer), and areas (e.g. Pongo). Another is to search their variables (e.g., rainfall)." | priority | cosmetic issues in model catalog explorer in model setups add the link to dockerhub instead of help call the button documentation change the text the mint model browser allows you to learn about the different models included in mint each model can have separate configurations each representing a unique set up of that model particular choices of processes regions etc each configuration can have separate setups that provide different default values for files and parameters in the search bar below you can search models in two ways which you can choose on the right one is to search their descriptions using a model name type e g agriculture keyword fertilizer and areas e g pongo another is to search their variables e g rainfall to be the mint model browser allows you to learn about the different models included in mint each model can have separate configurations each representing a unique functionality of that model particular choices of processes regions etc each configuration can have separate setups that provide different default values for files and parameters in the search bar below you can search models in two ways which you can choose on the right one is to search their descriptions using a model name type e g agriculture keyword fertilizer and areas e g pongo another is to search their variables e g rainfall | 1 |
697,745 | 23,951,651,824 | IssuesEvent | 2022-09-12 12:02:56 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | v12: EAP TLS Windows provisioning issue | Type: Bug Priority: High |
After authenticating on the captive portal to download the windows agent to get the EAP TLS certs.
````
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) INFO: [mac:78:4f:43:a0:75:c7] Instantiate profile ZamOpen (pf::Connection::ProfileFactory::_from_profile)
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) INFO: [mac:78:4f:43:a0:75:c7] Found provisioner Windows for 78:4f:43:a0:75:c7 (captiveportal::PacketFence::DynamicRouting::Module::Provisioning::execute_child)
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) INFO: [mac:78:4f:43:a0:75:c7] Request to /api/v1/pki/certs is unauthorized, will perform a login (pf::api::unifiedapiclient::call)
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) WARN: [mac:78:4f:43:a0:75:c7] Certificate creation failed (pf::pki_provider::packetfence_pki::get_bundle)
Aug 31 15:40:03 cluster2 pfqueue[447154]: pfqueue(447154) WARN: [mac:78:4f:43:a0:75:c7] Unable to pull accounting history for device 78:4f:43:a0:75:c7. The history set doesn't exist yet. (pf::accounting_events_history::latest_mac_history)
```` | 1.0 | v12: EAP TLS Windows provisioning issue -
After authenticating on the captive portal to download the windows agent to get the EAP TLS certs.
````
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) INFO: [mac:78:4f:43:a0:75:c7] Instantiate profile ZamOpen (pf::Connection::ProfileFactory::_from_profile)
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) INFO: [mac:78:4f:43:a0:75:c7] Found provisioner Windows for 78:4f:43:a0:75:c7 (captiveportal::PacketFence::DynamicRouting::Module::Provisioning::execute_child)
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) INFO: [mac:78:4f:43:a0:75:c7] Request to /api/v1/pki/certs is unauthorized, will perform a login (pf::api::unifiedapiclient::call)
Aug 31 15:40:02 cluster2 httpd.portal-docker-wrapper[452755]: httpd.portal(13) WARN: [mac:78:4f:43:a0:75:c7] Certificate creation failed (pf::pki_provider::packetfence_pki::get_bundle)
Aug 31 15:40:03 cluster2 pfqueue[447154]: pfqueue(447154) WARN: [mac:78:4f:43:a0:75:c7] Unable to pull accounting history for device 78:4f:43:a0:75:c7. The history set doesn't exist yet. (pf::accounting_events_history::latest_mac_history)
```` | priority | eap tls windows provisioning issue after authenticating on the captive portal to download the windows agent to get the eap tls certs aug httpd portal docker wrapper httpd portal info instantiate profile zamopen pf connection profilefact ory from profile aug httpd portal docker wrapper httpd portal info found provisioner windows for captive portal packetfence dynamicrouting module provisioning execute child aug httpd portal docker wrapper httpd portal info request to api pki certs is unauthorized will perfo rm a login pf api unifiedapiclient call aug httpd portal docker wrapper httpd portal warn certificate creation failed pf pki provider packetfen ce pki get bundle aug pfqueue pfqueue warn unable to pull accounting history for device the history s et doesn t exist yet pf accounting events history latest mac history | 1 |
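The "unauthorized, will perform a login" line in the record above is a common client pattern: on a 401, re-authenticate once and retry the request. A small stdlib-only sketch of that pattern — the `FakeAPI` client and all names here are hypothetical stand-ins, not PacketFence code:

```python
class Unauthorized(Exception):
    """Raised when the server answers 401."""

class FakeAPI:
    """Stand-in for an HTTP client whose token can be missing/expired."""
    def __init__(self):
        self.token = None

    def login(self):
        self.token = "fresh-token"

    def get(self, path):
        if self.token is None:
            raise Unauthorized(path)
        return {"path": path, "token": self.token}

def call_with_relogin(api, path):
    """Perform a request; on a 401, log in once and retry."""
    try:
        return api.get(path)
    except Unauthorized:
        api.login()           # re-authenticate exactly once
        return api.get(path)  # a second 401 propagates to the caller

if __name__ == "__main__":
    api = FakeAPI()
    print(call_with_relogin(api, "/api/v1/pki/certs")["token"])  # fresh-token
```

In the log above the relogin succeeds but the certificate call still fails afterwards, which points at the PKI request itself (e.g. credentials or template configuration) rather than the retry machinery.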
359,920 | 10,682,508,930 | IssuesEvent | 2019-10-22 05:42:31 | h2oai/datatable | https://api.github.com/repos/h2oai/datatable | closed | dt.isna() doesn't properly operate on int/float columns obtained from join | High priority bug join | Note: it works fine with string columns
```
>>> from datatable import dt, join, f
>>> DT = dt.Frame(A=[3, 4, 1, 5])
>>> J = dt.Frame(A=range(4), B=list('abcd'), C=[3.3]*4, D=[1,2,4,0])
>>> J.key = 'A'
>>> R = DT[:, :, join(J)]
>>> R[:, dt.isna(f[:])]
| A B C D
---+ -- -- -- --
0 | 0 0 0 0
1 | 0 1 0 0
2 | 0 0 0 0
3 | 0 1 0 0
[4 rows x 4 columns]
```
expected behavior: C and D columns same as B.
Thanks to @jangorecki for discovering the issue. | 1.0 | dt.isna() doesn't properly operate on int/float columns obtained from join - Note: it works fine with string columns
```
>>> from datatable import dt, join, f
>>> DT = dt.Frame(A=[3, 4, 1, 5])
>>> J = dt.Frame(A=range(4), B=list('abcd'), C=[3.3]*4, D=[1,2,4,0])
>>> J.key = 'A'
>>> R = DT[:, :, join(J)]
>>> R[:, dt.isna(f[:])]
| A B C D
---+ -- -- -- --
0 | 0 0 0 0
1 | 0 1 0 0
2 | 0 0 0 0
3 | 0 1 0 0
[4 rows x 4 columns]
```
expected behavior: C and D columns same as B.
Thanks to @jangorecki for discovering the issue. | priority | dt isna doesn t properly operate on int float columns obtained from join note it works fine with string columns from datatable import dt join f dt dt frame a j dt frame a range b list abcd c d j key a r dt r a b c d expected behavior c and d columns same as b thanks to jangorecki for discovering the issue | 1 |
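The expected behaviour described in the record above — join misses should read as missing in every right-hand column regardless of its type — can be shown with a plain-Python left join. This is an illustrative sketch of the semantics only, not the datatable implementation:

```python
def left_join(left_keys, right):
    """Left-join: for each key in `left_keys`, pick the matching row from
    `right` (a dict mapping key -> row dict); misses yield None for every
    right-hand column, regardless of column type."""
    columns = next(iter(right.values())).keys()
    out = []
    for k in left_keys:
        row = right.get(k)
        out.append({c: (row[c] if row is not None else None) for c in columns})
    return out

def isna(rows, column):
    """Missing-value indicator for one column of the joined rows."""
    return [r[column] is None for r in rows]

if __name__ == "__main__":
    right = {0: {"B": "a", "C": 3.3, "D": 1},
             1: {"B": "b", "C": 3.3, "D": 2},
             2: {"B": "c", "C": 3.3, "D": 4},
             3: {"B": "d", "C": 3.3, "D": 0}}
    joined = left_join([3, 4, 1, 5], right)  # keys 4 and 5 miss
    # The string, float and int columns all report the same miss pattern:
    print(isna(joined, "B"))  # [False, True, False, True]
    print(isna(joined, "C"))  # [False, True, False, True]
    print(isna(joined, "D"))  # [False, True, False, True]
```

The bug report is exactly that the real library produced this pattern for the string column `B` but all-zeros for the numeric columns `C` and `D`.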
251,946 | 8,029,781,261 | IssuesEvent | 2018-07-27 17:13:42 | GregariousAlex/eleva | https://api.github.com/repos/GregariousAlex/eleva | opened | Elevator display system | enhancement high priority | Make a system for displaying the elevator setup with the new storage system. | 1.0 | Elevator display system - Make a system for displaying the elevator setup with the new storage system. | priority | elevator display system make a system for displaying the elevator setup with the new storage system | 1 |
462,186 | 13,242,683,322 | IssuesEvent | 2020-08-19 10:08:33 | Uninett/Argus | https://api.github.com/repos/Uninett/Argus | closed | Sending notifications to SMS | data model discussion frontend priority: high | We will use the 3rd party app django-phonenumber-field to store a phone number for a user.
Currently only one, on the User. OneToOneField or directly?
We will need an endpoint to add/edit/delete the phone number, and add/edit/delete an existing phone number to a notification profile. What else? | 1.0 | Sending notifications to SMS - We will use the 3rd party app django-phonenumber-field to store a phone number for a user.
Currently only one, on the User. OneToOneField or directly?
We will need an endpoint to add/edit/delete the phone number, and add/edit/delete an existing phone number to a notification profile. What else? | priority | sending notifications to sms we will use the party app django phonenumber field to store a phone number for a user currently only one on the user onetoonefield or directly we will need an endpoint to add edit delete the phone number and add edit delete an existing phone number to a notification profile what else | 1 |
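django-phonenumber-field, mentioned in the record above, validates numbers via the phonenumbers library. As a rough stdlib-only illustration of the kind of normalisation involved — emphatically not the library's actual logic, and the E.164 regex here is a simplification:

```python
import re

# Very rough E.164 shape: '+' then 8-15 digits, first digit non-zero.
# Real validation (as done by the `phonenumbers` library) is far stricter
# and region-aware; this is only a sketch.
E164 = re.compile(r"^\+[1-9]\d{7,14}$")

def normalize_phone(raw):
    """Strip spaces/dashes/parentheses/dots and check the E.164 shape.
    Returns the normalised string, or None if it doesn't fit."""
    cleaned = re.sub(r"[\s\-().]", "", raw)
    return cleaned if E164.match(cleaned) else None

if __name__ == "__main__":
    print(normalize_phone("+47 22 85 50 50"))  # +4722855050
    print(normalize_phone("not a number"))     # None
```

Storing only the normalised form keeps the add/edit/delete endpoint simple: the API accepts free-form input, persists one canonical string, and rejects anything that fails validation.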
194,732 | 6,898,184,059 | IssuesEvent | 2017-11-24 08:22:53 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | closed | [Source view] Increase the contrast of comments as current color shade is unreadable against the black background | Priority/High | $subject please.

| 1.0 | [Source view] Increase the contrast of comments as current color shade is unreadable against the black background - $subject please.

| priority | increase the contrast of comments as current color shade is unreadable against the black background subject please | 1 |
548,539 | 16,066,494,643 | IssuesEvent | 2021-04-23 20:01:35 | flexpool/frontend | https://api.github.com/repos/flexpool/frontend | opened | Tips On Reducing Fees button does nothing | priority:high type:backlog | It can be a tooltip that says the higher the payout limit, the fewer fees you pay.
 | 1.0 | Tips On Reducing Fees button does nothing - It can be a tooltip that says the higher the payout limit, the fewer fees you pay.
 | priority | tips on reducing fees button does nothing it can be a tooltip that says the higher the payout limit the fewer fees you pay | 1 |
566,910 | 16,833,853,949 | IssuesEvent | 2021-06-18 09:17:54 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Import completion doesn't work for lang modules after dot | Priority/High SwanLakeDump Team/LanguageServer Type/Bug Version/SLAlpha3 | **Description:**
Import completion dropdown disappears as soon as you type `.`

https://user-images.githubusercontent.com/1686124/112156317-26009280-8c0c-11eb-99ec-3622428af288.mov
<!--
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc.
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 1.0 | Import completion doesn't work for lang modules after dot - **Description:**
Import completion dropdown disappears as soon as you type `.`
https://user-images.githubusercontent.com/1686124/112156317-26009280-8c0c-11eb-99ec-3622428af288.mov
<!--
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc.
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| priority | import completion doesn t work for lang modules after dot description import completion dropdown disappear as soon as you type steps to reproduce affected versions os db other environment details and versions related issues optional any related issues such as sub tasks issues reported in other repositories e g component repositories similar problems etc suggested labels optional optional comma separated list of suggested labels non committers can’t assign labels to issues so this will help issue creators who are not a committer to suggest possible labels suggested assignees optional | 1 |
549,185 | 16,087,428,764 | IssuesEvent | 2021-04-26 13:01:27 | sopra-fs21-group-24/server | https://api.github.com/repos/sopra-fs21-group-24/server | closed | Implement Gameservice method createGame | high priority | - should set gamemode, usermode and gameCreator
- returns game | 1.0 | Implement Gameservice method createGame - - should set gamemode, usermode and gameCreator
- returns game | priority | implement gameservice method creategame should set gamemode usermode and gamecreator returns game | 1 |
198,525 | 6,973,927,780 | IssuesEvent | 2017-12-11 22:18:47 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | 6.1.1 games are 'freezing' and CPU usage is stuck - no log written | High Priority | Since 6.1.1 we are seeing a number of servers that game play started lagging out very badly so that players in game could do nothing. Server was reporting a high CPU use of a number between 30-50% and stayed stuck on that number within 2-3 percent. Players who were not logged in could not log in - either they get stuck at entering world or at about 8% loading world. Servers are not writing a crash report or any exception logs. Restarting the server will let people in but the same thing starts happening within a few minutes.
Server opted to create fresh worlds in case it was an issue with a 6.03 world but within a short time on a fresh world it happened again. It happened so soon that no debris or much building had taken place so I don't think it's related to waste or debris. I am posting this so the various admins can post world to it that are doing this | 1.0 | 6.1.1 games are 'freezing' and CPU usage is stuck - no log written - Since 6.1.1 we are seeing a number of servers that game play started lagging out very badly so that players in game could do nothing. Server was reporting a high CPU use of a number between 30-50% and stayed stuck on that number within 2-3 percent. Players who were not logged in could not log in - either they get stuck at entering world or at about 8% loading world. Servers are not writing a crash report or any exception logs. Restarting the server will let people in but the same thing starts happening within a few minutes.
Server opted to create fresh worlds in case it was an issue with a 6.03 world but within a short time on a fresh world it happened again. It happened so soon that no debris or much building had taken place so I dont think its related to waste or debris. I am posting this so the various admins can post world to it that are doing this | priority | games are freezing and cpu usage is stuck no log written since we are seeing a number of servers that game play started lagging out very badly so that payers in game could do nothing server was reporting a high cpu use of a number between and stayed stuck on that number within percent players who were not logged in could not log in either they get stuck at entering world or at about loading world servers are not writing a crash report or any exception logs restarting the server will let people in but the same thing starts happening withing a few minutes server opted to create fresh worlds in case it was an issue with a world but within a short time on a fresh world it happened again it happened so soon that no debris or much building had taken place so i dont think its related to waste or debris i am posting this so the various admins can post world to it that are doing this | 1 |
77,916 | 3,507,544,025 | IssuesEvent | 2016-01-08 13:56:16 | AtomicGameEngine/AtomicGameEngine | https://api.github.com/repos/AtomicGameEngine/AtomicGameEngine | opened | Shader cache error using installed Editor (Windows Play Mode) | difficulty: 2 priority: high type: bug | Shader cache files are attempting to save to the installation folder:
[Fri Jan 8 05:52:57 2016] ERROR: Could not open file C:/Program Files/Atomic Editor/AtomicEditor/Resources/CoreData/Shaders/HLSL/Cache/Atomic2D_00000000.vs3
[Fri Jan 8 05:52:57 2016] DEBUG: Compiled pixel shader Atomic2D()
[Fri Jan 8 05:52:57 2016] ERROR: Could not open file C:/Program Files/Atomic Editor/AtomicEditor/Resources/CoreData/Shaders/HLSL/Cache/Atomic2D_00000000.ps3 | 1.0 | Shader cache error using installed Editor (Windows Play Mode) - Shader cache files are attempting to save to the installation folder:
[Fri Jan 8 05:52:57 2016] ERROR: Could not open file C:/Program Files/Atomic Editor/AtomicEditor/Resources/CoreData/Shaders/HLSL/Cache/Atomic2D_00000000.vs3
[Fri Jan 8 05:52:57 2016] DEBUG: Compiled pixel shader Atomic2D()
[Fri Jan 8 05:52:57 2016] ERROR: Could not open file C:/Program Files/Atomic Editor/AtomicEditor/Resources/CoreData/Shaders/HLSL/Cache/Atomic2D_00000000.ps3 | priority | shader cache error using installed editor windows play mode shader cache files are attempting to save to the installation folder error could not open file c program files atomic editor atomiceditor resources coredata shaders hlsl cache debug compiled pixel shader error could not open file c program files atomic editor atomiceditor resources coredata shaders hlsl cache | 1 |
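The errors in the record above come from writing shader-cache files under `C:/Program Files`, which is read-only for non-elevated users; the conventional fix is to redirect caches to a per-user writable directory. A hedged sketch of that convention — the env-var fallbacks are the usual per-platform defaults, not Atomic's actual logic:

```python
import os
import sys
import tempfile
from pathlib import Path

def user_cache_dir(app="AtomicEditor"):
    """Pick a per-user writable cache directory instead of the
    (often read-only) installation folder."""
    if sys.platform == "win32":
        base = os.environ.get("LOCALAPPDATA", tempfile.gettempdir())
    elif sys.platform == "darwin":
        base = os.path.expanduser("~/Library/Caches")
    else:
        base = os.environ.get("XDG_CACHE_HOME",
                              os.path.expanduser("~/.cache"))
    path = Path(base) / app / "ShaderCache"
    path.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
    return path

if __name__ == "__main__":
    cache = user_cache_dir()
    # The cache file from the log above, now in a writable location:
    (cache / "Atomic2D_00000000.vs3").write_bytes(b"")
    print(cache.exists())  # True
```

Keeping program binaries and mutable data in separate locations is what makes installs under `Program Files` work without requiring administrator rights at runtime.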
296,835 | 9,126,425,688 | IssuesEvent | 2019-02-24 21:26:31 | on3iro/aeons-end-randomizer | https://api.github.com/repos/on3iro/aeons-end-randomizer | closed | Fix styling, make responsive and beautify | Priority: High enhancement good first issue | Multiple commits and Merge Requests possible, before we merge this. We should create some subtasks | 1.0 | Fix styling, make responsive and beautify - Multiple commits and Merge Requests possible, before we merge this. We should create some subtasks | priority | fix styling make responsive and beautify multiple commits and merge requests possible before we merge this we should create some subtasks | 1 |
212,308 | 7,235,639,877 | IssuesEvent | 2018-02-13 01:50:27 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | ConfigurationPropertySources fails to resolve correctly with parent context | priority: high type: bug | Attaching a `ConfigurationPropertySources` in a parent/child `ApplicationContext` setup currently doesn't work correctly. The child will merge property sources from the parent, including the adapter. Caching then gets messed up. | 1.0 | ConfigurationPropertySources fails to resolve correctly with parent context - Attaching a `ConfigurationPropertySources` in a parent/child `ApplicationContext` setup currently doesn't work correctly. The child will merge property sources from the parent, including the adapter. Caching then gets messed up. | priority | configurationpropertysources fails to resolve correctly with parent context attaching a configurationpropertysources in a parent child applicationcontext setup currently doesn t work correctly the child will merge property sources from the parent including the adapter caching then gets messed up | 1 |
312,609 | 9,549,959,184 | IssuesEvent | 2019-05-02 10:40:03 | muskankhedia/Jarvis-Desktop | https://api.github.com/repos/muskankhedia/Jarvis-Desktop | opened | [improve] port all jarvis-personal-assistant features | enhancement high priority | **Is your feature request related to a problem? Please describe.**
porting all features of [Jarvis-personal-assistant web](https://github.com/Harkishen-Singh/Jarvis-personal-assistant) into Jarvis-Desktop
**Describe the solution you'd like**
direct as done in the web version
**Describe alternatives you've considered**
none
**Additional context**
none | 1.0 | [improve] port all jarvis-personal-assistant features - **Is your feature request related to a problem? Please describe.**
porting all features of [Jarvis-personal-assistant web](https://github.com/Harkishen-Singh/Jarvis-personal-assistant) into Jarvis-Desktop
**Describe the solution you'd like**
direct as done in the web version
**Describe alternatives you've considered**
none
**Additional context**
none | priority | port all jarvis personal assistant features is your feature request related to a problem please describe porting all features of into jarvis desktop describe the solution you d like direct as done in the web version describe alternatives you ve considered none additional context none | 1 |
244,182 | 7,871,621,045 | IssuesEvent | 2018-06-25 08:32:06 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | Error 500 /api/user/roles | epic/Auth kind/bug priority/high status/todo | ```
"phoenix": {
"request_id": "53ff408c-5b7b-4c4d-9a58-bab1d42294a5#61656",
"message": "** (Protocol.UndefinedError) protocol String.Chars not implemented for %{\"0\" => \"bf6201dc-a979-449f-8563-79717aa13db9\", \"1\" => \"bf6201dc-a979-449f-8563-79717aa13db9\"}. This protocol is implemented for: Atom, BitString, Date, DateTime, Decimal, Ecto.Date, Ecto.DateTime, Ecto.Time, Float, Geo.GeometryCollection, Geo.LineString, Geo.MultiLineString, Geo.MultiPoint, Geo.MultiPolygon, Geo.Point, Geo.PointM, Geo.PointZ, Geo.PointZM, Geo.Polygon, Integer, List, NaiveDateTime, Postgrex.Copy, Postgrex.Query, Postgrex.Stream, Time, URI, Version, Version.Requirement\n (elixir) /home/buildozer/aports/community/elixir/src/elixir-1.5.2/lib/elixir/lib/string/chars.ex:3: String.Chars.impl_for!/1\n (elixir) /home/buildozer/aports/community/elixir/src/elixir-1.5.2/lib/elixir/lib/string/chars.ex:22: String.Chars.to_string/1\n (elixir) lib/uri.ex:105: URI.encode_kv_pair/1\n (elixir) lib/enum.ex:1340: anonymous fn/4 in Enum.map_join/3\n (stdlib) lists.erl:1263: :lists.foldl/3\n (elixir) lib/enum.ex:1843: Enum.map_join/3\n (ehealth) lib/ehealth/api/mithril.ex:9: EHealth.API.Mithril.request/5\n (ehealth) lib/ehealth/api/mithril.ex:7: EHealth.API.Mithril.\"request! (overridable 1)\"/5\n",
"log_type": "error"
},
"@timestamp": "2018-06-25T05:05:32.773585826+00:00",
"tag": "phoenix.error"
``` | 1.0 | Error 500 /api/user/roles - ```
"phoenix": {
"request_id": "53ff408c-5b7b-4c4d-9a58-bab1d42294a5#61656",
"message": "** (Protocol.UndefinedError) protocol String.Chars not implemented for %{\"0\" => \"bf6201dc-a979-449f-8563-79717aa13db9\", \"1\" => \"bf6201dc-a979-449f-8563-79717aa13db9\"}. This protocol is implemented for: Atom, BitString, Date, DateTime, Decimal, Ecto.Date, Ecto.DateTime, Ecto.Time, Float, Geo.GeometryCollection, Geo.LineString, Geo.MultiLineString, Geo.MultiPoint, Geo.MultiPolygon, Geo.Point, Geo.PointM, Geo.PointZ, Geo.PointZM, Geo.Polygon, Integer, List, NaiveDateTime, Postgrex.Copy, Postgrex.Query, Postgrex.Stream, Time, URI, Version, Version.Requirement\n (elixir) /home/buildozer/aports/community/elixir/src/elixir-1.5.2/lib/elixir/lib/string/chars.ex:3: String.Chars.impl_for!/1\n (elixir) /home/buildozer/aports/community/elixir/src/elixir-1.5.2/lib/elixir/lib/string/chars.ex:22: String.Chars.to_string/1\n (elixir) lib/uri.ex:105: URI.encode_kv_pair/1\n (elixir) lib/enum.ex:1340: anonymous fn/4 in Enum.map_join/3\n (stdlib) lists.erl:1263: :lists.foldl/3\n (elixir) lib/enum.ex:1843: Enum.map_join/3\n (ehealth) lib/ehealth/api/mithril.ex:9: EHealth.API.Mithril.request/5\n (ehealth) lib/ehealth/api/mithril.ex:7: EHealth.API.Mithril.\"request! (overridable 1)\"/5\n",
"log_type": "error"
},
"@timestamp": "2018-06-25T05:05:32.773585826+00:00",
"tag": "phoenix.error"
``` | priority | error api user roles phoenix request id message protocol undefinederror protocol string chars not implemented for this protocol is implemented for atom bitstring date datetime decimal ecto date ecto datetime ecto time float geo geometrycollection geo linestring geo multilinestring geo multipoint geo multipolygon geo point geo pointm geo pointz geo pointzm geo polygon integer list naivedatetime postgrex copy postgrex query postgrex stream time uri version version requirement n elixir home buildozer aports community elixir src elixir lib elixir lib string chars ex string chars impl for n elixir home buildozer aports community elixir src elixir lib elixir lib string chars ex string chars to string n elixir lib uri ex uri encode kv pair n elixir lib enum ex anonymous fn in enum map join n stdlib lists erl lists foldl n elixir lib enum ex enum map join n ehealth lib ehealth api mithril ex ehealth api mithril request n ehealth lib ehealth api mithril ex ehealth api mithril request overridable n log type error timestamp tag phoenix error | 1 |
364,258 | 10,761,312,324 | IssuesEvent | 2019-10-31 20:30:21 | metrumresearchgroup/babylon | https://api.github.com/repos/metrumresearchgroup/babylon | opened | run_heuristics does not contain default values | bug priority: high risk: low | Summary
The run_heuristics structure should have a value of HeuristicUndefined for any values not found. | 1.0 | run_heuristics does not contain default values - Summary
The run_heuristics structure should have a value of HeuristicUndefined for any values not found. | priority | run heuristics does not contain default values summary the run heuristics structure should have a value of heuristicundefined for any values not found | 1 |
540,506 | 15,812,711,781 | IssuesEvent | 2021-04-05 06:14:41 | wso2/product-mi-tooling | https://api.github.com/repos/wso2/product-mi-tooling | closed | Limit the Size of the Request Body | Priority/High Severity/Major Type/Improvement mi dashboard 2.0 | **Description:**
<!-- Give a brief description of the issue -->
Accepting request bodies with unnecessarily large sizes could help attackers to use less connections to achieve Layer 7 DDoS of the webserver. Therefore, we need to limit the size of the request body to each form's requirements. For example, a search form with a 256-char search field should not accept more than 1KB value.
As a suggestion, we can set "org.eclipse.jetty.server.Request.maxFormContentSize" property in the jetty server to limit the size of the request body. For more info: https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | 1.0 | Limit the Size of the Request Body - **Description:**
<!-- Give a brief description of the issue -->
Accepting request bodies with unnecessarily large sizes could help attackers to use less connections to achieve Layer 7 DDoS of the webserver. Therefore, we need to limit the size of the request body to each form's requirements. For example, a search form with a 256-char search field should not accept more than 1KB value.
As a suggestion, we can set "org.eclipse.jetty.server.Request.maxFormContentSize" property in the jetty server to limit the size of the request body. For more info: https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | priority | limit the size of the request body description accepting request bodies with unnecessarily large sizes could help attackers to use less connections to achieve layer ddos of the webserver therefore we need to limit the size of the request body to each form s requirements for example a search form with a char search field should not accept more than value as a suggestion we can set org eclipse jetty server request maxformcontentsize property in the jetty server to limit the size of the request body for more info suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce related issues | 1 |
393,838 | 11,625,421,857 | IssuesEvent | 2020-02-27 12:39:41 | HE-Arc/CSRuby | https://api.github.com/repos/HE-Arc/CSRuby | closed | Création de la relation many to many entre user et item | core high priority | User_Item
- user_item_id
- email
- item id
- buy created at
- sell created at
- buy item
- sell item
- favourite item | 1.0 | Création de la relation many to many entre user et item - User_Item
- user_item_id
- email
- item id
- buy created at
- sell created at
- buy item
- sell item
- favourite item | priority | création de la relation many to many entre user et item user item user item id email item id buy created at sell created at buy item sell item favourite item | 1 |
559,733 | 16,575,151,972 | IssuesEvent | 2021-05-31 02:47:12 | StatisticsNZ/simplevis | https://api.github.com/repos/StatisticsNZ/simplevis | closed | Warning: font family not found in Windows font database | high priority | Warning message where Helvetica is selected | 1.0 | Warning: font family not found in Windows font database - Warning message where Helvetica is selected | priority | warning font family not found in windows font database warning message where helvetica is selected | 1 |
525,655 | 15,257,699,834 | IssuesEvent | 2021-02-21 02:46:59 | tysonkaufmann/su-go | https://api.github.com/repos/tysonkaufmann/su-go | closed | [DEV] Update the Auth Schema to use User schema Instead and Remove User Statistics | High Priority task | **Description**
Move the `password` field from the `Auth` collection to the `User` collection
Delete the user statistics fields from the `User` collection
Update code and tests for the Login Endpoint
Update code and tests for the Signup Endpoint
Update code and tests for the Reset Password Endpoint
| 1.0 | [DEV] Update the Auth Schema to use User schema Instead and Remove User Statistics - **Description**
Move the `password` field from the `Auth` collection to the `User` collection
Delete the user statistics fields from the `User` collection
Update code and tests for the Login Endpoint
Update code and tests for the Signup Endpoint
Update code and tests for the Reset Password Endpoint
| priority | update the auth schema to use user schema instead and remove user statistics description move the password field from the auth collection to the user collection delete the user statistics fields from the user collection update code and tests for the login endpoint update code and tests for the signup endpoint update code and tests for the reset password endpoint | 1 |
648,341 | 21,183,373,650 | IssuesEvent | 2022-04-08 10:07:58 | Apicurio/apicurio-registry | https://api.github.com/repos/Apicurio/apicurio-registry | closed | NoSuchElementException on concurrent requests to RegistryClientFactory.create | Bug component/registry priority/high | Hi,
I have JUnit tests running in parallel and these tests are using `org.springframework.kafka.annotation.KafkaListener` annotation.
Sometimes, I get the exception below.
I might be wrong but from what I understand, the method `io.apicurio.registry.rest.client.RegistryClientFactory#resolveProviderInstance` is not thread-safe and concurrent requests to `io.apicurio.registry.rest.client.RegistryClientFactory#create(java.lang.String, java.util.Map<java.lang.String,java.lang.Object>, io.apicurio.rest.client.auth.Auth)` on init might cause `NoSuchElementException`.
```
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.0.jar:?]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.9.jar:2.7.9]
at io.opentracing.contrib.kafka.spring.TracingConsumerFactory.createConsumer(TracingConsumerFactory.java:97) ~[opentracing-kafka-spring-0.1.15.jar:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:717) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.registerListenerContainer(KafkaListenerEndpointRegistry.java:203) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.config.KafkaListenerEndpointRegistrar.registerEndpoint(KafkaListenerEndpointRegistrar.java:238) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor.processListener(KafkaListenerAnnotationBeanPostProcessor.java:565) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor.processKafkaListener(KafkaListenerAnnotationBeanPostProcessor.java:459) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor.postProcessAfterInitialization(KafkaListenerAnnotationBeanPostProcessor.java:361) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:455) ~[spring-beans-5.3.13.jar:5.3.13]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1808) ~[spring-beans-5.3.13.jar:5.3.13]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:431) ~[spring-beans-5.3.13.jar:5.3.13]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:120) ~[spring-test-5.3.13.jar:5.3.13]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:83) ~[spring-test-5.3.13.jar:5.3.13]
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:248) ~[spring-test-5.3.13.jar:5.3.13]
at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:138) ~[spring-test-5.3.13.jar:5.3.13]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$8(ClassBasedTestDescriptor.java:363) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.executeAndMaskThrowable(ClassBasedTestDescriptor.java:368) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$9(ClassBasedTestDescriptor.java:363) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312) ~[?:?]
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735) ~[?:?]
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) ~[?:?]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestInstancePostProcessors(ClassBasedTestDescriptor.java:362) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$instantiateAndPostProcessTestInstance$6(ClassBasedTestDescriptor.java:283) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.instantiateAndPostProcessTestInstance(ClassBasedTestDescriptor.java:282) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$4(ClassBasedTestDescriptor.java:272) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at java.util.Optional.orElseGet(Optional.java:369) ~[?:?]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$5(ClassBasedTestDescriptor.java:271) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.execution.TestInstancesProvider.getTestInstances(TestInstancesProvider.java:31) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$prepare$0(TestMethodTestDescriptor.java:102) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:101) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:66) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$prepare$2(NodeTestTask.java:123) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.prepare(NodeTestTask.java:123) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:90) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) ~[?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) ~[?:?]
at java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:396) ~[?:?]
at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:721) ~[?:?]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.joinConcurrentTasksInReverseOrderToEnableWorkStealing(ForkJoinPoolHierarchicalTestExecutorService.java:162) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:136) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) ~[?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) ~[?:?]
at java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:396) ~[?:?]
at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:721) ~[?:?]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.joinConcurrentTasksInReverseOrderToEnableWorkStealing(ForkJoinPoolHierarchicalTestExecutorService.java:162) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:136) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) [junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) [junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) [junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185) [junit-platform-engine-1.8.1.jar:1.8.1]
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]
Caused by: java.lang.IllegalStateException: java.util.NoSuchElementException
at io.apicurio.registry.serde.AbstractSchemaResolver.configure(AbstractSchemaResolver.java:86) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.DefaultSchemaResolver.configure(DefaultSchemaResolver.java:56) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.SchemaResolverConfigurer.configure(SchemaResolverConfigurer.java:75) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaSerDe.configure(AbstractKafkaSerDe.java:68) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaDeserializer.configure(AbstractKafkaDeserializer.java:62) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.avro.AvroKafkaDeserializer.configure(AvroKafkaDeserializer.java:68) ~[apicurio-registry-serdes-avro-serde-2.1.3.Final.jar:?]
at org.springframework.kafka.support.serializer.ErrorHandlingDeserializer.configure(ErrorHandlingDeserializer.java:134) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:714) ~[kafka-clients-2.7.0.jar:?]
... 89 more
Caused by: java.util.NoSuchElementException
at java.lang.CompoundEnumeration.nextElement(ClassLoader.java:3046) ~[?:?]
at java.util.ServiceLoader$LazyClassPathLookupIterator.nextProviderClass(ServiceLoader.java:1206) ~[?:?]
at java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1221) ~[?:?]
at java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1265) ~[?:?]
at java.util.ServiceLoader$2.next(ServiceLoader.java:1306) ~[?:?]
at java.util.ServiceLoader$2.next(ServiceLoader.java:1297) ~[?:?]
at java.util.ServiceLoader$3.next(ServiceLoader.java:1395) ~[?:?]
at io.apicurio.registry.rest.client.RegistryClientFactory.resolveProviderInstance(RegistryClientFactory.java:105) ~[apicurio-registry-client-2.1.3.Final.jar:?]
at io.apicurio.registry.rest.client.RegistryClientFactory.create(RegistryClientFactory.java:77) ~[apicurio-registry-client-2.1.3.Final.jar:?]
at io.apicurio.registry.rest.client.RegistryClientFactory.create(RegistryClientFactory.java:71) ~[apicurio-registry-client-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractSchemaResolver.configure(AbstractSchemaResolver.java:82) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.DefaultSchemaResolver.configure(DefaultSchemaResolver.java:56) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.SchemaResolverConfigurer.configure(SchemaResolverConfigurer.java:75) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaSerDe.configure(AbstractKafkaSerDe.java:68) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaDeserializer.configure(AbstractKafkaDeserializer.java:62) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.avro.AvroKafkaDeserializer.configure(AvroKafkaDeserializer.java:68) ~[apicurio-registry-serdes-avro-serde-2.1.3.Final.jar:?]
at org.springframework.kafka.support.serializer.ErrorHandlingDeserializer.configure(ErrorHandlingDeserializer.java:134) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:714) ~[kafka-clients-2.7.0.jar:?]
... 89 more
``` | 1.0 | NoSuchElementException on concurrent requests to RegistryClientFactory.create - Hi,
I have JUnit tests running in parallel and these tests are using `org.springframework.kafka.annotation.KafkaListener` annotation.
Sometimes, I get the exception below.
I might be wrong but from what I understand, the method `io.apicurio.registry.rest.client.RegistryClientFactory#resolveProviderInstance` is not thread-safe and concurrent requests to `io.apicurio.registry.rest.client.RegistryClientFactory#create(java.lang.String, java.util.Map<java.lang.String,java.lang.Object>, io.apicurio.rest.client.auth.Auth)` on init might cause `NoSuchElementException`.
```
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.0.jar:?]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.9.jar:2.7.9]
at io.opentracing.contrib.kafka.spring.TracingConsumerFactory.createConsumer(TracingConsumerFactory.java:97) ~[opentracing-kafka-spring-0.1.15.jar:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:717) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.registerListenerContainer(KafkaListenerEndpointRegistry.java:203) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.config.KafkaListenerEndpointRegistrar.registerEndpoint(KafkaListenerEndpointRegistrar.java:238) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor.processListener(KafkaListenerAnnotationBeanPostProcessor.java:565) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor.processKafkaListener(KafkaListenerAnnotationBeanPostProcessor.java:459) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor.postProcessAfterInitialization(KafkaListenerAnnotationBeanPostProcessor.java:361) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:455) ~[spring-beans-5.3.13.jar:5.3.13]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1808) ~[spring-beans-5.3.13.jar:5.3.13]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:431) ~[spring-beans-5.3.13.jar:5.3.13]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:120) ~[spring-test-5.3.13.jar:5.3.13]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:83) ~[spring-test-5.3.13.jar:5.3.13]
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:248) ~[spring-test-5.3.13.jar:5.3.13]
at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:138) ~[spring-test-5.3.13.jar:5.3.13]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$8(ClassBasedTestDescriptor.java:363) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.executeAndMaskThrowable(ClassBasedTestDescriptor.java:368) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$9(ClassBasedTestDescriptor.java:363) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312) ~[?:?]
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735) ~[?:?]
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) ~[?:?]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestInstancePostProcessors(ClassBasedTestDescriptor.java:362) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$instantiateAndPostProcessTestInstance$6(ClassBasedTestDescriptor.java:283) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.instantiateAndPostProcessTestInstance(ClassBasedTestDescriptor.java:282) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$4(ClassBasedTestDescriptor.java:272) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at java.util.Optional.orElseGet(Optional.java:369) ~[?:?]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$5(ClassBasedTestDescriptor.java:271) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.execution.TestInstancesProvider.getTestInstances(TestInstancesProvider.java:31) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$prepare$0(TestMethodTestDescriptor.java:102) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:101) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:66) ~[junit-jupiter-engine-5.8.1.jar:5.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$prepare$2(NodeTestTask.java:123) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.prepare(NodeTestTask.java:123) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:90) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) ~[?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) ~[?:?]
at java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:396) ~[?:?]
at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:721) ~[?:?]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.joinConcurrentTasksInReverseOrderToEnableWorkStealing(ForkJoinPoolHierarchicalTestExecutorService.java:162) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:136) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) ~[?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) ~[?:?]
at java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:396) ~[?:?]
at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:721) ~[?:?]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.joinConcurrentTasksInReverseOrderToEnableWorkStealing(ForkJoinPoolHierarchicalTestExecutorService.java:162) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:136) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) ~[junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) [junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) [junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) [junit-platform-engine-1.8.1.jar:1.8.1]
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185) [junit-platform-engine-1.8.1.jar:1.8.1]
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]
Caused by: java.lang.IllegalStateException: java.util.NoSuchElementException
at io.apicurio.registry.serde.AbstractSchemaResolver.configure(AbstractSchemaResolver.java:86) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.DefaultSchemaResolver.configure(DefaultSchemaResolver.java:56) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.SchemaResolverConfigurer.configure(SchemaResolverConfigurer.java:75) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaSerDe.configure(AbstractKafkaSerDe.java:68) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaDeserializer.configure(AbstractKafkaDeserializer.java:62) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.avro.AvroKafkaDeserializer.configure(AvroKafkaDeserializer.java:68) ~[apicurio-registry-serdes-avro-serde-2.1.3.Final.jar:?]
at org.springframework.kafka.support.serializer.ErrorHandlingDeserializer.configure(ErrorHandlingDeserializer.java:134) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:714) ~[kafka-clients-2.7.0.jar:?]
... 89 more
Caused by: java.util.NoSuchElementException
at java.lang.CompoundEnumeration.nextElement(ClassLoader.java:3046) ~[?:?]
at java.util.ServiceLoader$LazyClassPathLookupIterator.nextProviderClass(ServiceLoader.java:1206) ~[?:?]
at java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1221) ~[?:?]
at java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1265) ~[?:?]
at java.util.ServiceLoader$2.next(ServiceLoader.java:1306) ~[?:?]
at java.util.ServiceLoader$2.next(ServiceLoader.java:1297) ~[?:?]
at java.util.ServiceLoader$3.next(ServiceLoader.java:1395) ~[?:?]
at io.apicurio.registry.rest.client.RegistryClientFactory.resolveProviderInstance(RegistryClientFactory.java:105) ~[apicurio-registry-client-2.1.3.Final.jar:?]
at io.apicurio.registry.rest.client.RegistryClientFactory.create(RegistryClientFactory.java:77) ~[apicurio-registry-client-2.1.3.Final.jar:?]
at io.apicurio.registry.rest.client.RegistryClientFactory.create(RegistryClientFactory.java:71) ~[apicurio-registry-client-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractSchemaResolver.configure(AbstractSchemaResolver.java:82) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.DefaultSchemaResolver.configure(DefaultSchemaResolver.java:56) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.SchemaResolverConfigurer.configure(SchemaResolverConfigurer.java:75) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaSerDe.configure(AbstractKafkaSerDe.java:68) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.AbstractKafkaDeserializer.configure(AbstractKafkaDeserializer.java:62) ~[apicurio-registry-serde-common-2.1.3.Final.jar:?]
at io.apicurio.registry.serde.avro.AvroKafkaDeserializer.configure(AvroKafkaDeserializer.java:68) ~[apicurio-registry-serdes-avro-serde-2.1.3.Final.jar:?]
at org.springframework.kafka.support.serializer.ErrorHandlingDeserializer.configure(ErrorHandlingDeserializer.java:134) ~[spring-kafka-2.7.9.jar:2.7.9]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:714) ~[kafka-clients-2.7.0.jar:?]
... 89 more
``` | priority | nosuchelementexception on concurrent requests to registryclientfactory create hi i have junit tests running in parallel and these tests are using org springframework kafka annotation kafkalistener annotation sometimes i get the exception below i might be wrong but from what i understand the method io apicurio registry rest client registryclientfactory resolveproviderinstance is not thread safe and concurrent requests to io apicurio registry rest client registryclientfactory create java lang string java util map io apicurio rest client auth auth on init might cause nosuchelementexception org apache kafka common kafkaexception failed to construct kafka consumer at org apache kafka clients consumer kafkaconsumer kafkaconsumer java at org springframework kafka core defaultkafkaconsumerfactory createrawconsumer defaultkafkaconsumerfactory java at org springframework kafka core defaultkafkaconsumerfactory createkafkaconsumer defaultkafkaconsumerfactory java at org springframework kafka core defaultkafkaconsumerfactory createconsumerwithadjustedproperties defaultkafkaconsumerfactory java at org springframework kafka core defaultkafkaconsumerfactory createkafkaconsumer defaultkafkaconsumerfactory java at org springframework kafka core defaultkafkaconsumerfactory createconsumer defaultkafkaconsumerfactory java at io opentracing contrib kafka spring tracingconsumerfactory createconsumer tracingconsumerfactory java at org springframework kafka listener kafkamessagelistenercontainer listenerconsumer kafkamessagelistenercontainer java at org springframework kafka listener kafkamessagelistenercontainer dostart kafkamessagelistenercontainer java at org springframework kafka listener abstractmessagelistenercontainer start abstractmessagelistenercontainer java at org springframework kafka listener concurrentmessagelistenercontainer dostart concurrentmessagelistenercontainer java at org springframework kafka listener abstractmessagelistenercontainer start 
abstractmessagelistenercontainer java at org springframework kafka config kafkalistenerendpointregistry startifnecessary kafkalistenerendpointregistry java at org springframework kafka config kafkalistenerendpointregistry registerlistenercontainer kafkalistenerendpointregistry java at org springframework kafka config kafkalistenerendpointregistrar registerendpoint kafkalistenerendpointregistrar java at org springframework kafka annotation kafkalistenerannotationbeanpostprocessor processlistener kafkalistenerannotationbeanpostprocessor java at org springframework kafka annotation kafkalistenerannotationbeanpostprocessor processkafkalistener kafkalistenerannotationbeanpostprocessor java at org springframework kafka annotation kafkalistenerannotationbeanpostprocessor postprocessafterinitialization kafkalistenerannotationbeanpostprocessor java at org springframework beans factory support abstractautowirecapablebeanfactory applybeanpostprocessorsafterinitialization abstractautowirecapablebeanfactory java at org springframework beans factory support abstractautowirecapablebeanfactory initializebean abstractautowirecapablebeanfactory java at org springframework beans factory support abstractautowirecapablebeanfactory initializebean abstractautowirecapablebeanfactory java at org springframework test context support dependencyinjectiontestexecutionlistener injectdependencies dependencyinjectiontestexecutionlistener java at org springframework test context support dependencyinjectiontestexecutionlistener preparetestinstance dependencyinjectiontestexecutionlistener java at org springframework test context testcontextmanager preparetestinstance testcontextmanager java at org springframework test context junit jupiter springextension postprocesstestinstance springextension java at org junit jupiter engine descriptor classbasedtestdescriptor lambda invoketestinstancepostprocessors classbasedtestdescriptor java at org junit jupiter engine descriptor classbasedtestdescriptor 
executeandmaskthrowable classbasedtestdescriptor java at org junit jupiter engine descriptor classbasedtestdescriptor lambda invoketestinstancepostprocessors classbasedtestdescriptor java at java util stream referencepipeline accept referencepipeline java at java util stream referencepipeline accept referencepipeline java at java util arraylist arraylistspliterator foreachremaining arraylist java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java util stream streamspliterators wrappingspliterator foreachremaining streamspliterators java at java util stream streams concatspliterator foreachremaining streams java at java util stream streams concatspliterator foreachremaining streams java at java util stream referencepipeline head foreach referencepipeline java at org junit jupiter engine descriptor classbasedtestdescriptor invoketestinstancepostprocessors classbasedtestdescriptor java at org junit jupiter engine descriptor classbasedtestdescriptor lambda instantiateandpostprocesstestinstance classbasedtestdescriptor java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit jupiter engine descriptor classbasedtestdescriptor instantiateandpostprocesstestinstance classbasedtestdescriptor java at org junit jupiter engine descriptor classbasedtestdescriptor lambda testinstancesprovider classbasedtestdescriptor java at java util optional orelseget optional java at org junit jupiter engine descriptor classbasedtestdescriptor lambda testinstancesprovider classbasedtestdescriptor java at org junit jupiter engine execution testinstancesprovider gettestinstances testinstancesprovider java at org junit jupiter engine descriptor testmethodtestdescriptor lambda prepare testmethodtestdescriptor java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit jupiter 
engine descriptor testmethodtestdescriptor prepare testmethodtestdescriptor java at org junit jupiter engine descriptor testmethodtestdescriptor prepare testmethodtestdescriptor java at org junit platform engine support hierarchical nodetesttask lambda prepare nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask prepare nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice exclusivetask compute forkjoinpoolhierarchicaltestexecutorservice java at java util concurrent recursiveaction exec recursiveaction java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjointask dojoin forkjointask java at java util concurrent forkjointask join forkjointask java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice joinconcurrenttasksinreverseordertoenableworkstealing forkjoinpoolhierarchicaltestexecutorservice java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice invokeall forkjoinpoolhierarchicaltestexecutorservice java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask executerecursively 
nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice exclusivetask compute forkjoinpoolhierarchicaltestexecutorservice java at java util concurrent recursiveaction exec recursiveaction java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjointask dojoin forkjointask java at java util concurrent forkjointask join forkjointask java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice joinconcurrenttasksinreverseordertoenableworkstealing forkjoinpoolhierarchicaltestexecutorservice java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice invokeall forkjoinpoolhierarchicaltestexecutorservice java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask executerecursively nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at org junit platform engine support hierarchical forkjoinpoolhierarchicaltestexecutorservice exclusivetask compute forkjoinpoolhierarchicaltestexecutorservice java at java util concurrent recursiveaction exec recursiveaction java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjoinpool workqueue 
toplevelexec forkjoinpool java at java util concurrent forkjoinpool scan forkjoinpool java at java util concurrent forkjoinpool runworker forkjoinpool java at java util concurrent forkjoinworkerthread run forkjoinworkerthread java caused by java lang illegalstateexception java util nosuchelementexception at io apicurio registry serde abstractschemaresolver configure abstractschemaresolver java at io apicurio registry serde defaultschemaresolver configure defaultschemaresolver java at io apicurio registry serde schemaresolverconfigurer configure schemaresolverconfigurer java at io apicurio registry serde abstractkafkaserde configure abstractkafkaserde java at io apicurio registry serde abstractkafkadeserializer configure abstractkafkadeserializer java at io apicurio registry serde avro avrokafkadeserializer configure avrokafkadeserializer java at org springframework kafka support serializer errorhandlingdeserializer configure errorhandlingdeserializer java at org apache kafka clients consumer kafkaconsumer kafkaconsumer java more caused by java util nosuchelementexception at java lang compoundenumeration nextelement classloader java at java util serviceloader lazyclasspathlookupiterator nextproviderclass serviceloader java at java util serviceloader lazyclasspathlookupiterator hasnextservice serviceloader java at java util serviceloader lazyclasspathlookupiterator hasnext serviceloader java at java util serviceloader next serviceloader java at java util serviceloader next serviceloader java at java util serviceloader next serviceloader java at io apicurio registry rest client registryclientfactory resolveproviderinstance registryclientfactory java at io apicurio registry rest client registryclientfactory create registryclientfactory java at io apicurio registry rest client registryclientfactory create registryclientfactory java at io apicurio registry serde abstractschemaresolver configure abstractschemaresolver java at io apicurio registry serde 
defaultschemaresolver configure defaultschemaresolver java at io apicurio registry serde schemaresolverconfigurer configure schemaresolverconfigurer java at io apicurio registry serde abstractkafkaserde configure abstractkafkaserde java at io apicurio registry serde abstractkafkadeserializer configure abstractkafkadeserializer java at io apicurio registry serde avro avrokafkadeserializer configure avrokafkadeserializer java at org springframework kafka support serializer errorhandlingdeserializer configure errorhandlingdeserializer java at org apache kafka clients consumer kafkaconsumer kafkaconsumer java more | 1 |
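The "Caused by" chain above points at concurrent iteration of a `java.util.ServiceLoader` inside `RegistryClientFactory.resolveProviderInstance`: the loader's lazy iterator is documented as not safe for use by multiple threads, so two Kafka consumers being configured at once can race between `hasNext()` and `next()` and surface a `NoSuchElementException`. A minimal sketch of the usual mitigation — serialize provider resolution behind a lock and cache the result — using a hypothetical resolver type (the names below are illustrative, not the apicurio API):

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for ServiceLoader-based provider resolution.
// The real fix would live inside RegistryClientFactory; this only shows
// the pattern: iterate the loader once under a lock, then reuse the result.
final class SafeProviderResolver<T> {
    private final Iterable<T> loader;  // e.g. ServiceLoader.load(providerClass)
    private volatile T cached;         // resolved provider, set exactly once

    SafeProviderResolver(Iterable<T> loader) {
        this.loader = loader;
    }

    T resolve(T fallback) {
        T result = cached;
        if (result != null) {
            return result;
        }
        synchronized (this) {          // serialize the lazy iteration
            if (cached == null) {
                Iterator<T> it = loader.iterator();
                cached = it.hasNext() ? it.next() : fallback;
            }
            return cached;
        }
    }
}

class Demo {
    public static void main(String[] args) throws Exception {
        SafeProviderResolver<String> r =
                new SafeProviderResolver<>(List.of("providerA"));
        // Hammer resolve() from several threads; with a shared, unguarded
        // lazy iterator this access pattern is what can throw
        // NoSuchElementException in the trace above.
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1_000; j++) {
                    if (!"providerA".equals(r.resolve("fallback"))) {
                        throw new AssertionError("unexpected provider");
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("resolved=" + r.resolve("fallback"));
    }
}
```

An equivalent application-level workaround, assuming the apicurio API stays as-is, is to force client creation once during single-threaded startup and reuse it, instead of letting every `@KafkaListener` container trigger provider resolution concurrently.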
603,175 | 18,530,699,472 | IssuesEvent | 2021-10-21 05:21:05 | AY2122S1-CS2113-T16-2/tp | https://api.github.com/repos/AY2122S1-CS2113-T16-2/tp | opened | As a helper, I can store the information of the application into a text file | type.Story priority.High | ... so that the data can be easily imported. | 1.0 | As a helper, I can store the information of the application into a text file - ... so that the data can be easily imported. | priority | as a helper i can store the information of the application into a text file so that the data can be easily imported | 1 |
105,210 | 4,232,494,243 | IssuesEvent | 2016-07-05 00:00:04 | flipdazed/Hybrid-Monte-Carlo | https://api.github.com/repos/flipdazed/Hybrid-Monte-Carlo | closed | Unit Test: Expected Change in Hamiltonian < exp{-\delta H} > == 1 | Enhancement High Priority | **Aim**
Ensures that the acceptance rate of HMC is high and that energy change is small
**Implementation**
- Calculate the energy from all sites in an HMC move
- Average over all moves | 1.0 | Unit Test: Expected Change in Hamiltonian < exp{-\delta H} > == 1 - **Aim**
Ensures that the acceptance rate of HMC is high and that energy change is small
**Implementation**
- Calculate the energy from all sites in an HMC move
- Average over all moves | priority | unit test expected change in hamiltonian aim ensures that the acceptance rate of hmc is high and that energy change is small implementation calculate the energy from all sites in an hmc move average over all moves | 1 |
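The quantity described in the row above is the standard Creutz check: for a reversible, area-preserving integrator with a Metropolis accept/reject step, ⟨exp(−δH)⟩ = 1 holds identically, so averaging exp(−δH) over all moves is a sensitive regression test. A hedged sketch of the averaging step (the project itself is Python; Java is used here purely for illustration, and the δH values are toy stand-ins for per-move energy changes summed over all sites):

```java
import java.util.List;

// Sketch of the <exp(-deltaH)> check: average exp(-dH) over all HMC moves
// and verify the result sits near 1 within a tolerance.
class CreutzCheck {
    static double expDeltaHAverage(List<Double> deltaH) {
        return deltaH.stream()
                .mapToDouble(dh -> Math.exp(-dh))
                .average()
                .orElse(Double.NaN);
    }

    public static void main(String[] args) {
        // Illustrative, symmetric toy data; a real run would collect one
        // deltaH per HMC move from the full lattice.
        List<Double> deltaH = List.of(0.1, -0.1, 0.05, -0.05);
        double avg = expDeltaHAverage(deltaH);
        System.out.println("<exp(-dH)> = " + avg);
        if (Math.abs(avg - 1.0) > 0.05) {
            throw new AssertionError("Creutz equality violated in toy check");
        }
    }
}
```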
56,707 | 3,081,044,846 | IssuesEvent | 2015-08-22 09:30:07 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Deadlock when waking from sleep mode. | bug imported Priority-High | _From [anonymou...@gmail.com](https://code.google.com/u/116679534817362406064/) on July 18, 2012 19:06:49_
flylink r502 -beta44-x64, win7
I put the computer into suspend overnight and while I am at work. Sometimes, on waking from suspend, Fly locks up. At that point the interface still works: tabs switch, but from the moment of the deadlock no new messages are added to the chats; trying to close a tab hangs it for good.
The upshot is that after about 4 days of use, Fly has to be killed.
It started around beta 35 or so; it did not happen before.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=791_ | 1.0 | Deadlock when waking from sleep mode. - _From [anonymou...@gmail.com](https://code.google.com/u/116679534817362406064/) on July 18, 2012 19:06:49_
flylink r502 -beta44-x64, win7
I put the computer into suspend overnight and while I am at work. Sometimes, on waking from suspend, Fly locks up. At that point the interface still works: tabs switch, but from the moment of the deadlock no new messages are added to the chats; trying to close a tab hangs it for good.
The upshot is that after about 4 days of use, Fly has to be killed.
It started around beta 35 or so; it did not happen before.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=791_ | priority | deadlock when waking from sleep mode from on july flylink i put the computer into suspend overnight and while i am at work sometimes on waking from suspend fly locks up at that point the interface still works tabs switch but from the moment of the deadlock no new messages are added to the chats trying to close a tab hangs it for good the upshot is that after about days of use fly has to be killed it started around beta or so it did not happen before original issue | 1 |
118,825 | 4,756,621,884 | IssuesEvent | 2016-10-24 14:29:51 | mulesoft/api-workbench | https://api.github.com/repos/mulesoft/api-workbench | closed | Incorrect parsing of example when discriminatorValue is used with union type | bug priority:high | ```yaml
#%RAML 1.0
title: Example API
types:
Base:
type: object
discriminator: type
properties:
type: string
TypeA:
type: Base
discriminatorValue: type_a
TypeB:
type: Base
discriminatorValue: type_b
Optional:
type: object
properties:
attr: Base?
example:
attr:
type: type_a
```
The code example above fails to parse in my installation of API Workbench. This is the error I'm getting:
> Union type option does not pass validation (Base: None of the 'Base' type known subtypes declare 'type_a' as value of discriminating property 'type'.
Changing `attr: Base?` to `attr?: Base` solves the problem, but I need to express attribute that accepts null values.
API Workbench version is 0.8.38 and Atom.io version is 1.11.1 | 1.0 | Incorrect parsing of example when discriminatorValue is used with union type - ```yaml
#%RAML 1.0
title: Example API
types:
Base:
type: object
discriminator: type
properties:
type: string
TypeA:
type: Base
discriminatorValue: type_a
TypeB:
type: Base
discriminatorValue: type_b
Optional:
type: object
properties:
attr: Base?
example:
attr:
type: type_a
```
The code example above fails to parse in my installation of API Workbench. This is the error I'm getting:
> Union type option does not pass validation (Base: None of the 'Base' type known subtypes declare 'type_a' as value of discriminating property 'type'.
Changing `attr: Base?` to `attr?: Base` solves the problem, but I need to express attribute that accepts null values.
API Workbench version is 0.8.38 and Atom.io version is 1.11.1 | priority | incorrect parsing of example when discriminatorvalue is used with union type yaml raml title example api types base type object discriminator type properties type string typea type base discriminatorvalue type a typeb type base discriminatorvalue type b optional type object properties attr base example attr type type a the code example above fails to parse in my installation of api workbench this is the error i m getting union type option does not pass validation base none of the base type known subtypes declare type a as value of discriminating property type changing attr base to attr base solves the problem but i need to express attribute that accepts null values api workbench version is and atom io version is | 1 |