Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
605,635 | 18,738,159,500 | IssuesEvent | 2021-11-04 10:19:59 | geolonia/normalize-japanese-addresses | https://api.github.com/repos/geolonia/normalize-japanese-addresses | closed | 町丁目レベルに長音符 (「ー」) を含む住所で、町丁目レベルの正規化が失敗する | Priority: High bug Impact: Large Time: Short | お世話になっております。
v2.3.2 を使っているのですが、町丁目レベル (level 3) に長音符 (「ー」) を含む住所で、町丁目レベルの正規化が失敗する事象が確認されております (おそらく v2.3.3 でも再現すると思われます)。
例えば、「広島市西区商工センター六丁目9番39号」というような住所です。README 記載のデモページでも試してみたところ、同様の事象が再現されました。

この広島市西区商工センター六丁目は、下画像のように住所データの CSV には該当する住所がございました。

正規化結果を見るに、前処理で長音符(「ー」)が半角ハイフンに変換されるため、住所データの該当住所とマッチングできなくなっているように見受けられます。
ソースコードの https://github.com/geolonia/normalize-japanese-addresses/blob/master/src/main.ts#L72-L77 あたりの処理が原因になっているのではないかと思われます (「ー六」と長音符 + 数字の文字列にマッチするので、長音符が半角ハイフンに変換される)。
全国にはこの手の「◯◯センター◯丁目」という住所が複数あり (上の「商工センター」の他に、「流通センター」、「卸センター」など)、前処理時のルールを一部見直した方が良さそうです。 | 1.0 | 町丁目レベルに長音符 (「ー」) を含む住所で、町丁目レベルの正規化が失敗する - お世話になっております。
v2.3.2 を使っているのですが、町丁目レベル (level 3) に長音符 (「ー」) を含む住所で、町丁目レベルの正規化が失敗する事象が確認されております (おそらく v2.3.3 でも再現すると思われます)。
例えば、「広島市西区商工センター六丁目9番39号」というような住所です。README 記載のデモページでも試してみたところ、同様の事象が再現されました。

この広島市西区商工センター六丁目は、下画像のように住所データの CSV には該当する住所がございました。

正規化結果を見るに、前処理で長音符(「ー」)が半角ハイフンに変換されるため、住所データの該当住所とマッチングできなくなっているように見受けられます。
ソースコードの https://github.com/geolonia/normalize-japanese-addresses/blob/master/src/main.ts#L72-L77 あたりの処理が原因になっているのではないかと思われます (「ー六」と長音符 + 数字の文字列にマッチするので、長音符が半角ハイフンに変換される)。
全国にはこの手の「◯◯センター◯丁目」という住所が複数あり (上の「商工センター」の他に、「流通センター」、「卸センター」など)、前処理時のルールを一部見直した方が良さそうです。 | priority | 町丁目レベルに長音符 「ー」 を含む住所で、町丁目レベルの正規化が失敗する お世話になっております。 を使っているのですが、町丁目レベル level に長音符 「ー」 を含む住所で、町丁目レベルの正規化が失敗する事象が確認されております おそらく でも再現すると思われます 。 例えば、「 」というような住所です。readme 記載のデモページでも試してみたところ、同様の事象が再現されました。 この広島市西区商工センター六丁目は、下画像のように住所データの csv には該当する住所がございました。 正規化結果を見るに、前処理で長音符(「ー」)が半角ハイフンに変換されるため、住所データの該当住所とマッチングできなくなっているように見受けられます。 ソースコードの あたりの処理が原因になっているのではないかと思われます 「ー六」と長音符 数字の文字列にマッチするので、長音符が半角ハイフンに変換される 。 全国にはこの手の「◯◯センター◯丁目」という住所が複数あり 上の「商工センター」の他に、「流通センター」、「卸センター」など 、前処理時のルールを一部見直した方が良さそうです。 | 1 |
698,031 | 23,963,055,688 | IssuesEvent | 2022-09-12 21:04:02 | zitadel/zitadel | https://api.github.com/repos/zitadel/zitadel | closed | SAML 2 Implementation | OKR priority: high | Enable SAML 2.0 integration for idp as well as sp.
## Tasks
- [x] #3088
- [ ] ~~#3089~~
- [ ] ~~#3090~~
- [x] #3091
- [x] #3092
- [x] #3094
- [x] #3336
- [x] #3440
- [x] #3612
## SAML2.0 SPs we know customers are interested in
- [ ] Cohesity DataPlatform
- [ ] Rubrik CDM
- [ ] VMWare vCloud Director
- [ ] Citrix Netscaler in combination with Citrix Storefront
- [ ] AzureAD SSO (needs actions and additional configuration for the limited algorithms with SHA1)
- [ ] Google Workspace SSO
- [x] AWS SSO
- [x] Auth0
- [x] Ping-Identity
- [x] OneLogin
- [ ] Fastly
- [x] Gitlab SaaS
- [x] Atlassian Cloud
- [x] hin.ch
@stebenz feel free to extend this list with your SPs | 1.0 | SAML 2 Implementation - Enable SAML 2.0 integration for idp as well as sp.
## Tasks
- [x] #3088
- [ ] ~~#3089~~
- [ ] ~~#3090~~
- [x] #3091
- [x] #3092
- [x] #3094
- [x] #3336
- [x] #3440
- [x] #3612
## SAML2.0 SPs we know customers are interested in
- [ ] Cohesity DataPlatform
- [ ] Rubrik CDM
- [ ] VMWare vCloud Director
- [ ] Citrix Netscaler in combination with Citrix Storefront
- [ ] AzureAD SSO (needs actions and additional configuration for the limited algorithms with SHA1)
- [ ] Google Workspace SSO
- [x] AWS SSO
- [x] Auth0
- [x] Ping-Identity
- [x] OneLogin
- [ ] Fastly
- [x] Gitlab SaaS
- [x] Atlassian Cloud
- [x] hin.ch
@stebenz feel free to extend this list with your SPs | priority | saml implementation enable saml integration for idp as well as sp tasks sps we know customers are interested in cohesity dataplatform rubrik cdm vmware vcloud director citrix netscaler in combination with citrix storefront azuread sso needs actions and additional configuration for the limited algorithms with google workspace sso aws sso ping identity onelogin fastly gitlab saas atlassian cloud hin ch stebenz feel free to extend this list with your sps | 1 |
779,992 | 27,375,439,527 | IssuesEvent | 2023-02-28 05:18:51 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Parser Incorrectly Identifies as `.map()` Lang lib Method for `map<T>` type descriptors. | Type/Improvement Priority/High Team/CompilerFE Area/Parser Deferred | **Description:**
Consider the following examples:
```
public function main() {
map<string> a = {a: "dul", b: "grg"}.
map<string> b = {d: "dul", e: "fg"};
}
```
The invocation expression of variable `a`, would be parsed as `{a: dul,b: grg}.map(<> b, $missingNode$_0= {d: dul,e: fg})`, where the parsed syntax is
```
map<string> a = {a: "dul", b: "grg"}.
map MISSING[(]<string> b MISSING[,] MISSING[]= {d: "dul", e: "fg"} MISSING[)];
```
**Describe your problem(s)**
Avoid parsing as the `.map()` lang lib method for map type descriptors.
**Describe your solution(s)**
If the parser sees the `<` token right after the `map` token, can avoid parsing it as `map MISSING[(] ...`
**Related Issues (optional):**
#32955
| 1.0 | Parser Incorrectly Identifies as `.map()` Lang lib Method for `map<T>` type descriptors. - **Description:**
Consider the following examples:
```
public function main() {
map<string> a = {a: "dul", b: "grg"}.
map<string> b = {d: "dul", e: "fg"};
}
```
The invocation expression of variable `a`, would be parsed as `{a: dul,b: grg}.map(<> b, $missingNode$_0= {d: dul,e: fg})`, where the parsed syntax is
```
map<string> a = {a: "dul", b: "grg"}.
map MISSING[(]<string> b MISSING[,] MISSING[]= {d: "dul", e: "fg"} MISSING[)];
```
**Describe your problem(s)**
Avoid parsing as the `.map()` lang lib method for map type descriptors.
**Describe your solution(s)**
If the parser sees the `<` token right after the `map` token, can avoid parsing it as `map MISSING[(] ...`
**Related Issues (optional):**
#32955
| priority | parser incorrectly identifies as map lang lib method for map type descriptors description consider the following examples public function main map a a dul b grg map b d dul e fg the invocation expression of variable a would be parsed as a dul b grg map b missingnode d dul e fg where the parsed syntax is map a a dul b grg map missing b missing missing d dul e fg missing describe your problem s avoid parsing as the map lang lib method for map type descriptors describe your solution s if the parser sees the token right after the map token can avoid parsing it as map missing related issues optional | 1 |
373,064 | 11,032,719,199 | IssuesEvent | 2019-12-06 20:54:37 | Qiskit/qiskit-terra | https://api.github.com/repos/Qiskit/qiskit-terra | closed | Can transpile a multi-qubit circuit on a 1Q device with cmap=None | bug priority: high status: pending PR | <!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: latest
- **Python version**:
- **Operating system**:
### What is the current behavior?
```
qc = QuantumCircuit(2, 1)
qc.h(0)
qc.ry(0.11,1)
qc.measure([0], [0])
```
will be properly transpiled even if the backend has `n_qubits=1` and `coupling_map = None`
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
| 1.0 | Can transpile a multi-qubit circuit on a 1Q device with cmap=None - <!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: latest
- **Python version**:
- **Operating system**:
### What is the current behavior?
```
qc = QuantumCircuit(2, 1)
qc.h(0)
qc.ry(0.11,1)
qc.measure([0], [0])
```
will be properly transpiled even if the backend has `n_qubits=1` and `coupling_map = None`
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
| priority | can transpile a multi qubit circuit on a device with cmap none information qiskit terra version latest python version operating system what is the current behavior qc quantumcircuit qc h qc ry qc measure will be properly transpiled even if the backend has n qubits and coupling map none steps to reproduce the problem what is the expected behavior suggested solutions | 1 |
374,506 | 11,091,511,309 | IssuesEvent | 2019-12-15 12:55:56 | aikoofujimotoo/nhentai-cli | https://api.github.com/repos/aikoofujimotoo/nhentai-cli | closed | Split items to 25 items each items on load command | enhancement high-priority | Instead listing all items in the text file, split them to display 25 items.
The scenario would be,
1. The CLI will count all total codes found on the file provided
2. Uses `async.eachLimit` with limit of 25
Tasklist:
- [x] Integrity checking (see commit e97a0dd)
- [x] Download resumability (see commit 0e1b5f6 & 194826f)
- [x] This actual issue (see commit 63429d8) | 1.0 | Split items to 25 items each items on load command - Instead listing all items in the text file, split them to display 25 items.
The scenario would be,
1. The CLI will count all total codes found on the file provided
2. Uses `async.eachLimit` with limit of 25
Tasklist:
- [x] Integrity checking (see commit e97a0dd)
- [x] Download resumability (see commit 0e1b5f6 & 194826f)
- [x] This actual issue (see commit 63429d8) | priority | split items to items each items on load command instead listing all items in the text file split them to display items the scenario would be the cli will count all total codes found on the file provided uses async eachlimit with limit of tasklist integrity checking see commit download resumability see commit this actual issue see commit | 1 |
141,881 | 5,446,294,070 | IssuesEvent | 2017-03-07 10:14:20 | DevopediaOrg/webapp | https://api.github.com/repos/DevopediaOrg/webapp | closed | Notifications system | enhancement high priority | Implement a system that includes these:
- Someone edited an article after your edit
- Someone upvoted/reviewed your edit on the discussion board
- System changes
Email is not to be used for notifications. Instead we will show a visible (red) indicator (like on FB). | 1.0 | Notifications system - Implement a system that includes these:
- Someone edited an article after your edit
- Someone upvoted/reviewed your edit on the discussion board
- System changes
Email is not to be used for notifications. Instead we will show a visible (red) indicator (like on FB). | priority | notifications system implement a system that includes these someone edited an article after your edit someone upvoted reviewed your edit on the discussion board system changes email is not to be used for notifications instead we will show a visible red indicator like on fb | 1 |
688,026 | 23,545,795,647 | IssuesEvent | 2022-08-21 04:40:50 | phyloref/klados | https://api.github.com/repos/phyloref/klados | closed | Add ability to export the Phyx View as a CSV | programming only difficulty: moderate priority: high | The Phyx View includes a useful summary of the state of the Phyx file: a list of all the phyloreferences in the file, with information on whether or not they resolved as expected on the reference phylogeny. It might be useful to be able to export this information as a CSV file. | 1.0 | Add ability to export the Phyx View as a CSV - The Phyx View includes a useful summary of the state of the Phyx file: a list of all the phyloreferences in the file, with information on whether or not they resolved as expected on the reference phylogeny. It might be useful to be able to export this information as a CSV file. | priority | add ability to export the phyx view as a csv the phyx view includes a useful summary of the state of the phyx file a list of all the phyloreferences in the file with information on whether or not they resolved as expected on the reference phylogeny it might be useful to be able to export this information as a csv file | 1 |
786,411 | 27,645,646,849 | IssuesEvent | 2023-03-10 22:39:57 | mikeblazanin/gcplyr | https://api.github.com/repos/mikeblazanin/gcplyr | closed | Currently merge_dfs drops by complete.cases but make_design uses "NA" for missing entries | type: enhancement priority: high | Desired behavior should probably be for make_design to use real NA's in output rather than character NA's | 1.0 | Currently merge_dfs drops by complete.cases but make_design uses "NA" for missing entries - Desired behavior should probably be for make_design to use real NA's in output rather than character NA's | priority | currently merge dfs drops by complete cases but make design uses na for missing entries desired behavior should probably be for make design to use real na s in output rather than character na s | 1 |
632,982 | 20,241,024,244 | IssuesEvent | 2022-02-14 09:18:19 | PoProstuMieciek/wikipedia-scraper | https://api.github.com/repos/PoProstuMieciek/wikipedia-scraper | opened | feat/subpage-relations | priority: high type: feat scope: database | **AC**
- depends on #45
- [ ] subpages => one-to-many => statistics #44
- [ ] subpages => one-to-many => links #46
- [ ] subpages => one-to-many => images #43 | 1.0 | feat/subpage-relations - **AC**
- depends on #45
- [ ] subpages => one-to-many => statistics #44
- [ ] subpages => one-to-many => links #46
- [ ] subpages => one-to-many => images #43 | priority | feat subpage relations ac depends on subpages one to many statistics subpages one to many links subpages one to many images | 1 |
525,180 | 15,239,798,011 | IssuesEvent | 2021-02-19 05:20:44 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Downloaded logs are not being integrated | 7.0.0-rc4 Bug: ???? Bug: development Docs: not needed Effort: small Module: vaccines Priority: high | ## Describe the bug
Logs being downloaded are not being saved into realm. I have been waiting for 30 minutes and no logs have downloaded!
### To reproduce
1. Go to Add a sensor
2. Think the first lot of logs will download
3. wait awhile
3. no more will be saved after that
### Expected behaviour
Logs should be saved
### Proposed Solution
Thinks its to do with this: https://github.com/openmsupply/mobile/blob/3cfacf047f7e99cd3b58ec17136520eed16d622e/src/actions/Bluetooth/SensorDownloadActions.js#L107-L114
The `nextPossibleLogTime` being passed into `calculateNumberToSave` should be a unix timestamp?
### Version and device info
- App version: 7.0.0-rc4
- Tablet model: N/A
- OS version: N/A
### Additional context
N/A
| 1.0 | Downloaded logs are not being integrated - ## Describe the bug
Logs being downloaded are not being saved into realm. I have been waiting for 30 minutes and no logs have downloaded!
### To reproduce
1. Go to Add a sensor
2. Think the first lot of logs will download
3. wait awhile
3. no more will be saved after that
### Expected behaviour
Logs should be saved
### Proposed Solution
Thinks its to do with this: https://github.com/openmsupply/mobile/blob/3cfacf047f7e99cd3b58ec17136520eed16d622e/src/actions/Bluetooth/SensorDownloadActions.js#L107-L114
The `nextPossibleLogTime` being passed into `calculateNumberToSave` should be a unix timestamp?
### Version and device info
- App version: 7.0.0-rc4
- Tablet model: N/A
- OS version: N/A
### Additional context
N/A
| priority | downloaded logs are not being integrated describe the bug logs being downloaded are not being saved into realm i have been waiting for minutes and no logs have downloaded to reproduce go to add a sensor think the first lot of logs will download wait awhile no more will be saved after that expected behaviour logs should be saved proposed solution thinks its to do with this the nextpossiblelogtime being passed into calculatenumbertosave should be a unix timestamp version and device info app version tablet model n a os version n a additional context n a | 1 |
507,967 | 14,685,847,799 | IssuesEvent | 2021-01-01 11:41:26 | projectdiscovery/nuclei | https://api.github.com/repos/projectdiscovery/nuclei | closed | [issue] bug with matched part (all) | Priority: High Type: Bug | Template:-
```yaml
part: all
```
Expected, matcher looks for body + header.
Currently, matcher looks for the only body. | 1.0 | [issue] bug with matched part (all) - Template:-
```yaml
part: all
```
Expected, matcher looks for body + header.
Currently, matcher looks for the only body. | priority | bug with matched part all template yaml part all expected matcher looks for body header currently matcher looks for the only body | 1 |
286,113 | 8,784,039,331 | IssuesEvent | 2018-12-20 08:37:03 | projectacrn/acrn-hypervisor | https://api.github.com/repos/projectacrn/acrn-hypervisor | closed | GPU Mediator shall provide for inter-domain priorities | area: hypervisor priority: high status: closed type: feature | GPU Mediator shall provide for inter-domain priorities Graphics mediator shall provide for different priorities between the individual domains. by default workloads in the service OS shall have the highest priority for execution but other guest domains may have their own relative rankings in priority. It should be possible, if for example the instrument cluster workload is running in a guest domain, for a domain to have a priority higher than all others with graphics workloads and attain the same QoS as if that workload were in the service OS. Priority will establish whether a workload about to be scheduled will or will not cause the current workload to be pre-empted or reset. Only if the incoming workload is higher priority will a pre-emption/reset flow initiate. | 1.0 | GPU Mediator shall provide for inter-domain priorities - GPU Mediator shall provide for inter-domain priorities Graphics mediator shall provide for different priorities between the individual domains. by default workloads in the service OS shall have the highest priority for execution but other guest domains may have their own relative rankings in priority. It should be possible, if for example the instrument cluster workload is running in a guest domain, for a domain to have a priority higher than all others with graphics workloads and attain the same QoS as if that workload were in the service OS. Priority will establish whether a workload about to be scheduled will or will not cause the current workload to be pre-empted or reset. Only if the incoming workload is higher priority will a pre-emption/reset flow initiate.
| priority | gpu mediator shall provide for inter domain priorities gpu mediator shall provide for inter domain priorities graphics mediator shall provide for different priorities between the individual domains by default workloads in the service os shall have the highest priority for execution but other guest domains may have their own relative rankings in priority it should be possible if for example the instrument cluster workload is running in a guest domain for a domain to have a priority higher than all others with graphics workloads and attain the same qos as if that workload were in the service os priority will establish whether a workload about to be scheduled will or will not cause the current workload to be pre empted or reset only if the incoming workload is higher priority will a pre emption reset flow initiate | 1 |
355,275 | 10,578,538,531 | IssuesEvent | 2019-10-07 23:02:19 | ClinGen/clincoded | https://api.github.com/repos/ClinGen/clincoded | opened | Fix GDMs with duplicate evidence | EP request GCI bug curation edit priority: high | 15 GDMs were transferred across from individuals to the ID/Autism affiliation using the GCI tool. Some of the evidence was copied twice. Kelly will determine which of the duplicate evidence should be kept, and which should be deleted.
<img width="757" alt="Screen Shot 2019-10-07 at 1 19 24 PM" src="https://user-images.githubusercontent.com/15131169/66355082-bca79980-e91b-11e9-9104-38e3acf118d4.png">
| 1.0 | Fix GDMs with duplicate evidence - 15 GDMs were transferred across from individuals to the ID/Autism affiliation using the GCI tool. Some of the evidence was copied twice. Kelly will determine which of the duplicate evidence should be kept, and which should be deleted.
<img width="757" alt="Screen Shot 2019-10-07 at 1 19 24 PM" src="https://user-images.githubusercontent.com/15131169/66355082-bca79980-e91b-11e9-9104-38e3acf118d4.png">
| priority | fix gdms with duplicate evidence gdms were transferred across from individuals to the id autism affiliation using the gci tool some of the evidence was copied twice kelly will determine which of the duplicate evidence should be kept and which should be deleted img width alt screen shot at pm src | 1 |
266,050 | 8,362,217,879 | IssuesEvent | 2018-10-03 16:12:31 | CS2103-AY1819S1-F10-4/main | https://api.github.com/repos/CS2103-AY1819S1-F10-4/main | opened | Hashing password causes junit tests to fail | priority.high severity.high type.bug | This issue is still happening as of PR #38 and is scheduled to be fixed by milestone `v1.2`.
**Describe the bug**
Despite the raw password being the same, upon hashing, it results in a different hash due to the use of `salt`. This raises some issue especially in the unit testing such as comparing whether two accounts with the same username and password is equivalent returning false.
At the mean time, the code to hash the password upon adding the account into the `AddressBook` has been commented out.
**To Reproduce**
1. Go to `AddressBook.java`
2. Uncomment the line `account.getPassword().hash(account.getUsername().toString());`
3. Run the test cases
4. See errors
**Expected behavior**
Ideally, it does not affect the existing test cases. One way to do that is to either have a constant salt (which is really, really, really not recommended), or generating the salt based on the user account itself (e.g. using the username).
| 1.0 | Hashing password causes junit tests to fail - This issue is still happening as of PR #38 and is scheduled to be fixed by milestone `v1.2`.
**Describe the bug**
Despite the raw password being the same, upon hashing, it results in a different hash due to the use of `salt`. This raises some issue especially in the unit testing such as comparing whether two accounts with the same username and password is equivalent returning false.
At the mean time, the code to hash the password upon adding the account into the `AddressBook` has been commented out.
**To Reproduce**
1. Go to `AddressBook.java`
2. Uncomment the line `account.getPassword().hash(account.getUsername().toString());`
3. Run the test cases
4. See errors
**Expected behavior**
Ideally, it does not affect the existing test cases. One way to do that is to either have a constant salt (which is really, really, really not recommended), or generating the salt based on the user account itself (e.g. using the username).
| priority | hashing password causes junit tests to fail this issue is still happening as of pr and is scheduled to be fixed by milestone describe the bug despite the raw password being the same upon hashing it results in a different hash due to the use of salt this raises some issue especially in the unit testing such as comparing whether two accounts with the same username and password is equivalent returning false at the mean time the code to hash the password upon adding the account into the addressbook has been commented out to reproduce go to addressbook java uncomment the line account getpassword hash account getusername tostring run the test cases see errors expected behavior ideally it does not affect the existing test cases one way to do that is to either have a constant salt which is really really really not recommended or generating the salt based on the user account itself e g using the username | 1 |
619,281 | 19,520,945,325 | IssuesEvent | 2021-12-29 18:19:42 | openedx/build-test-release-wg | https://api.github.com/repos/openedx/build-test-release-wg | closed | User Testing of Maple | priority:high affects:maple | User Testing of Maple.
[LMS](lms.maple-btr.edunext.link)
[Studio](studio.maple-btr.edunext.link)
[ecommerce.lms.maple-btr.edunext.link](ecommerce.lms.maple-btr.edunext.link)
[MFE](apps.lms.maple-btr.edunext.link)
Once you've created an account comment here and we will give you any permissions you need.
PR for the release notes, if you have any specific comments: https://github.com/edx/edx-documentation/pull/1994 | 1.0 | User Testing of Maple - User Testing of Maple.
[LMS](lms.maple-btr.edunext.link)
[Studio](studio.maple-btr.edunext.link)
[ecommerce.lms.maple-btr.edunext.link](ecommerce.lms.maple-btr.edunext.link)
[MFE](apps.lms.maple-btr.edunext.link)
Once you've created an account comment here and we will give you any permissions you need.
PR for the release notes, if you have any specific comments: https://github.com/edx/edx-documentation/pull/1994 | priority | user testing of maple user testing of maple lms maple btr edunext link studio maple btr edunext link ecommerce lms maple btr edunext link apps lms maple btr edunext link once you ve created an account comment here and we will give you any permissions you need pr for the release notes if you have any specific comments | 1 |
703,715 | 24,171,118,504 | IssuesEvent | 2022-09-22 19:19:10 | ssec/polar2grid | https://api.github.com/repos/ssec/polar2grid | closed | MODIS overpass version 2.3 version 3.0 processing speeds | bug optimization priority:high | I wrote scripts to time the creation of all P2G MODIS GeoTIFF default bands for a 15 minutes pass using the default WGS84 dynamic grid. For Version 2.3 P2G I added the times together that it took create the default images for crefl (true and false color) and for the vis/ir bands. For P2G version 3.0, I used 4 workers, which I think is the default. Here are the results:
P2G Version 2.3: 8m22s
P2G Version 3.0: 15m33s
It took almost twice as long using more CPU's to create the images using P2G v3.0 than it did with V2.3 on the machine bumi. I cannot imagine releasing software until this is improved. Dynamic grids are the way the software is most used by Liam and I in SSEC, and I suspect most used by the community too. | 1.0 | MODIS overpass version 2.3 version 3.0 processing speeds - I wrote scripts to time the creation of all P2G MODIS GeoTIFF default bands for a 15 minutes pass using the default WGS84 dynamic grid. For Version 2.3 P2G I added the times together that it took create the default images for crefl (true and false color) and for the vis/ir bands. For P2G version 3.0, I used 4 workers, which I think is the default. Here are the results:
P2G Version 2.3: 8m22s
P2G Version 3.0: 15m33s
It took almost twice as long using more CPU's to create the images using P2G v3.0 than it did with V2.3 on the machine bumi. I cannot imagine releasing software until this is improved. Dynamic grids are the way the software is most used by Liam and I in SSEC, and I suspect most used by the community too. | priority | modis overpass version version processing speeds i wrote scripts to time the creation of all modis geotiff default bands for a minutes pass using the default dynamic grid for version i added the times together that it took create the default images for crefl true and false color and for the vis ir bands for version i used workers which i think is the default here are the results version version it took almost twice as long using more cpu s to create the images using than it did with on the machine bumi i cannot imagine releasing software until this is improved dynamic grids are the way the software is most used by liam and i in ssec and i suspect most used by the community too | 1 |
281,471 | 8,695,799,686 | IssuesEvent | 2018-12-04 15:58:34 | SunwellWoW/Sunwell-TBC-Bugtracker | https://api.github.com/repos/SunwellWoW/Sunwell-TBC-Bugtracker | closed | Prospecting | Critical High Priority fixed | Description: skill is not working corectly
How it works: destroy all ores in stack and don't give gems
How it should work: destroy only 5 ores and give gems
Source (you should point out proofs of your report, please give us some source): https://tbc-twinhead.twinstar.cz/?spell=31252
| 1.0 | Prospecting - Description: skill is not working corectly
How it works: destroy all ores in stack and don't give gems
How it should work: destroy only 5 ores and give gems
Source (you should point out proofs of your report, please give us some source): https://tbc-twinhead.twinstar.cz/?spell=31252
| priority | prospecting description skill is not working corectly how it works destroy all ores in stack and don t give gems how it should work destroy only ores and give gems source you should point out proofs of your report please give us some source | 1 |
773,441 | 27,157,664,853 | IssuesEvent | 2023-02-17 09:17:00 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Add document symbol API to Semantic model | Type/NewFeature Priority/High Team/CompilerFETools Area/SemanticAPI Deferred | **Description:**
Add an API to get the top level symbols defined in a given document
| 1.0 | Add document symbol API to Semantic model - **Description:**
Add an API to get the top level symbols defined in a given document
| priority | add document symbol api to semantic model description add an api to get the top level symbols defined in a given document | 1 |
411,263 | 12,016,152,378 | IssuesEvent | 2020-04-10 15:29:52 | scality/metalk8s | https://api.github.com/repos/scality/metalk8s | closed | Documentation does not indicate default admin credentials | complexity:easy priority:high topic:authentication topic:docs topic:operations | **Component**:
'documentation'
**What happened**:
After deploying the bootstrap node, if we follow the documentation to access the MetalK8s Admin UI, we cannot login: https://metal-k8s.readthedocs.io/en/development-2.6/installation/services.html#metalk8s-gui
Actually the doc is still referring to admin/admin default creds but this has changed with the introduction of dex. It is now: admin@metalk8s.invalid/password
Also, I guess the login page screenshot in the doc is not the right one.
**What was expected**:
Doc should provide the right information and also provide a link to user management page if Platform Administrator wants to provision new admin user: https://metal-k8s.readthedocs.io/en/development-2.6/operation/cluster_and_service_configuration.html
Change the login screenshot.
| 1.0 | Documentation does not indicate default admin credentials - **Component**:
'documentation'
**What happened**:
After deploying the bootstrap node, if we follow the documentation to access the MetalK8s Admin UI, we cannot login: https://metal-k8s.readthedocs.io/en/development-2.6/installation/services.html#metalk8s-gui
Actually the doc is still referring to admin/admin default creds but this has changed with the introduction of dex. It is now: admin@metalk8s.invalid/password
Also, I guess the login page screenshot in the doc is not the right one.
**What was expected**:
Doc should provide the right information and also provide a link to user management page if Platform Administrator wants to provision new admin user: https://metal-k8s.readthedocs.io/en/development-2.6/operation/cluster_and_service_configuration.html
Change the login screenshot.
| priority | documentation does not indicate default admin credentials component documentation what happened after deploying the bootstrap node if we follow the documentation to access the admin ui we cannot login actually the doc is still referring to admin admin default creds but this has changed with the introduction of dex it is now admin invalid password also i guess the login page screenshot in the doc is not the right one what was expected doc should provide the right information and also provide a link to user management page if platform administrator wants to provision new admin user change the login screenshot | 1 |
651,216 | 21,469,857,333 | IssuesEvent | 2022-04-26 08:32:38 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Passive STS Certificate Upload UI is not consistent with other certificate upload UIs in console | ui Priority/High Severity/Major bug console Affected-5.12.0 QA-Reported | **How to reproduce:**
1. Access console
2. Create a SP with standard based application > protocol passive STS
3. Go to protocol tab > provide certificate option
4. Compare the UI for certificate upload with other applications
5. This UI is not consistent with the other certificate upload UIs
other apps

passive sts

**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
IS 5.12.0 alpha 19 | 1.0 | Passive STS Certificate Upload UI is not consistent with other certificate upload UIs in console - **How to reproduce:**
1. Access console
2. Create a SP with standard based application > protocol passive STS
3. Go to protocol tab > provide certificate option
4. Compare the UI for certificate upload with other applications
5. This UI is not consistent with the other certificate upload UIs
other apps

passive sts

**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
IS 5.12.0 alpha 19 | priority | passive sts certificate upload ui is not consistent with other certificate upload uis in console how to reproduce access console create a sp with standard based application protocol passive sts go to protocol tab provide certificate option compare the ui for certificate upload with other applications this ui is not consistent with the other certificate upload uis other apps passive sts environment information please complete the following information remove any unnecessary fields is alpha | 1 |
198,802 | 6,977,822,095 | IssuesEvent | 2017-12-12 15:43:49 | zgphp/joindin-raffler-backend | https://api.github.com/repos/zgphp/joindin-raffler-backend | opened | Add missing fos:user migration file | bug high priority | Merging #142 left the project in unusable state, because the db table is missing.
In order to prevent similar issues from happening in the future
As a contributor to this project
I need to write tests to cover the new functionality
In reference to #144 and #145 | 1.0 | Add missing fos:user migration file - Merging #142 left the project in unusable state, because the db table is missing.
In order to prevent similar issues from happening in the future
As a contributor to this project
I need to write tests to cover the new functionality
In reference to #144 and #145 | priority | add missing fos user migration file merging left the project in unusable state because the db table is missing in order to prevent similar issues from happening in the future as a contributor to this project i need to write tests to cover the new functionality in reference to and | 1 |
718,179 | 24,706,472,704 | IssuesEvent | 2022-10-19 19:32:12 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | closed | [DSG]: Support Delete Storage | kind/enhancement priority/high feature/dsg | ### Feature description
Need the ability to delete a storage item.
Add it to the kebab option on storage rows.
### Describe alternatives you've considered
_No response_
### Anything else?
Mocks are not included. Just prompt a confirmation modal when deleting. | 1.0 | [DSG]: Support Delete Storage - ### Feature description
Need the ability to delete a storage item.
Add it to the kebab option on storage rows.
### Describe alternatives you've considered
_No response_
### Anything else?
Mocks are not included. Just prompt a confirmation modal when deleting. | priority | support delete storage feature description need the ability to delete a storage item add it to the kebab option on storage rows describe alternatives you ve considered no response anything else mocks are not included just prompt a confirmation modal when deleting | 1 |
1,616 | 2,516,611,461 | IssuesEvent | 2015-01-16 06:07:30 | centre-for-educational-technology/edidaktikum | https://api.github.com/repos/centre-for-educational-technology/edidaktikum | closed | Group member search searches among all users who are not group members | bug High Priority | That is, the user wants to find a member of the group, but the referenced search queries users who are not group members:

 | 1.0 | Group member search searches among all users who are not group members - That is, the user wants to find a member of the group, but the referenced search queries users who are not group members:

 | priority | group member search searches among all users who are not group members that is the user wants to find a member of the group but the referenced search queries users who are not group members | 1 |
267,854 | 8,393,801,541 | IssuesEvent | 2018-10-09 21:41:12 | material-components/material-components-web-components | https://api.github.com/repos/material-components/material-components-web-components | opened | Split components into "base" class for functionality, and "MWC" class for styling | enhancement high priority | Splitting the component classes apart this way will help with theming. If someone wants to theme an MWC component differently, or override part of the theming, they can extend the base class and only have to implement `renderStyle()`
Example:
mwc-switch-base.ts:
```ts
export class SwitchBase extends BaseComponent {
renderStyle() {}
render() {
return html`${this.renderStyle()}...`;
}
}
```
mwc-switch.ts:
```ts
import {SwitchBase} from './mwc-switch-base.js';
import {style} from './mwc-switch-css.js';
@customElement('mwc-switch')
export class Switch extends SwitchBase {
renderStyle() {
return style;
}
}
| 1.0 | Split components into "base" class for functionality, and "MWC" class for styling - Splitting the component classes apart this way will help with theming. If someone wants to theme an MWC component differently, or override part of the theming, they can extend the base class and only have to implement `renderStyle()`
Example:
mwc-switch-base.ts:
```ts
export class SwitchBase extends BaseComponent {
renderStyle() {}
render() {
return html`${this.renderStyle()}...`;
}
}
```
mwc-switch.ts:
```ts
import {SwitchBase} from './mwc-switch-base.js';
import {style} from './mwc-switch-css.js';
@customElement('mwc-switch')
export class Switch extends SwitchBase {
renderStyle() {
return style;
}
}
| priority | split components into base class for functionality and mwc class for styling splitting the component classes apart this way will help with theming if someone wants to theme an mwc component differently or override part of the theming they can extend the base class and only have to implement renderstyle example mwc switch base ts ts export class switchbase extends basecomponent renderstyle render return html this renderstyle mwc switch ts ts import switchbase from mwc switch base js import style from mwc switch css js customelement mwc switch export class switch extends switchbase renderstyle return style | 1 |
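The record above describes splitting each component into a behavior-only base class and a subclass whose sole job is styling, by overriding a single `renderStyle()` hook. Outside the original TypeScript/MWC context, the same inheritance pattern can be sketched in Python; all names here are illustrative, not part of any real library:

```python
class SwitchBase:
    """Behavior-only base class; themed subclasses override render_style()."""

    def render_style(self):
        # Default: no styling. A themed subclass returns its CSS here.
        return ""

    def render(self):
        # Compose the (possibly empty) style block with the markup.
        return f"{self.render_style()}<switch/>"


class ThemedSwitch(SwitchBase):
    """Styling-only subclass: the single override is render_style()."""

    def render_style(self):
        return "<style>.switch{color:teal}</style>"
```

The design point matches the record: anyone wanting a different theme extends the base class and reimplements only the style hook, while all rendering behavior stays inherited.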
302,537 | 9,275,887,381 | IssuesEvent | 2019-03-20 00:23:29 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Block effects performance pass | Art High Priority | - all of them need instance rendering turned on
- all of them need to be Pooled Objects
- general performance pass, tweak duration, particle counts, etc | 1.0 | Block effects performance pass - - all of them need instance rendering turned on
- all of them need to be Pooled Objects
- general performance pass, tweak duration, particle counts, etc | priority | block effects performance pass all of them need instance rendering turned on all of them need to be pooled objects general performance pass tweak duration particle counts etc | 1 |
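The "Pooled Objects" item in the record above refers to the standard object-pool optimization: reuse finished effect objects instead of allocating fresh ones each spawn. A minimal sketch in Python (all names hypothetical, not from the game's codebase):

```python
class Pool:
    """Tiny object pool: acquire() reuses released objects when possible."""

    def __init__(self, factory):
        self._factory = factory
        self._free = []      # released objects awaiting reuse
        self.created = 0     # counts real allocations, for illustration

    def acquire(self):
        if self._free:
            return self._free.pop()   # reuse: no new allocation
        self.created += 1
        return self._factory()        # pool empty: allocate one

    def release(self, obj):
        self._free.append(obj)        # effect finished; keep it for later


pool = Pool(dict)        # factory stands in for "spawn an effect object"
a = pool.acquire()       # first use: a real allocation
pool.release(a)          # effect done: return it to the pool
b = pool.acquire()       # same object comes back, no allocation
```

A real engine pool would also reset the object's state on release; this sketch only shows the allocation-reuse mechanic.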
618,533 | 19,473,570,951 | IssuesEvent | 2021-12-24 07:44:46 | priyasinghppl/Test | https://api.github.com/repos/priyasinghppl/Test | closed | Test Ticket by priya singh on 24/12/2021 | >>>bug Bug: High Priority | <b>User:</b>
Priya Singh
<b>What Happened:</b>
Unable to login into the system
<b>What Should Have Happened:</b>
user should be able to log in into system
<b>Relevant Contacts:</b>
Jhon Doe
<b>Were you able to replicate the issue:</b>
Yes
<b>Replication Steps:</b>
Go to login page
<b>Additional Details:</b>
Go to login page and cick login button
<b>Troubleshooting Steps:</b>
Tried to log in
<b>SLA Level:</b>
High | 1.0 | Test Ticket by priya singh on 24/12/2021 - <b>User:</b>
Priya Singh
<b>What Happened:</b>
Unable to login into the system
<b>What Should Have Happened:</b>
user should be able to log in into system
<b>Relevant Contacts:</b>
Jhon Doe
<b>Were you able to replicate the issue:</b>
Yes
<b>Replication Steps:</b>
Go to login page
<b>Additional Details:</b>
Go to login page and cick login button
<b>Troubleshooting Steps:</b>
Tried to log in
<b>SLA Level:</b>
High | priority | test ticket by priya singh on user priya singh what happened unable to login into the system what should have happened user should be able to log in into system relevant contacts jhon doe were you able to replicate the issue yes replication steps go to login page additional details go to login page and cick login button troubleshooting steps tried to log in sla level high | 1 |
818,000 | 30,666,899,770 | IssuesEvent | 2023-07-25 19:00:45 | sandialabs/common | https://api.github.com/repos/sandialabs/common | opened | Problem collecting ping results | bug high priority | From an email I sent o James/Sam:
> OK, I see what’s going on. It has to do with time zones, specifically time zones that are east of GMT.
>
> When the API call to get the ping results is sent, the start and end dates are specified. Often we don’t care about the end date, and send ‘null’ for that, but we typically do care about the start date, which in this case looks something like this: 2023-01-25T00:00:00-07:00 The part at the end is the time zone offset
>
> -07:00 in this example is for Mountain standard time. For Malaysia, it is +08:00.
>
> The problem is the ‘+’ sign which is a special character in a URL. When passing that in a URL, + is converted into ‘ ‘, so in Malaysia, the API sees a start date of this: 2023-01-25T00:00:00 08:00
>
> That is an invalid time/date string, so it gets ignored and ‘null’ is used. When most of the other API calls get null, they just get data from the earliest time in the DB (i.e. no starting date is used in the DB query). But the network status call works slightly differently; if no start/end date is specified, it doesn’t gather the ping data.
>
> We didn’t see this in our testing because the testbed is always set to Mountain time. I’m sure this bug affects installations like in Romania as well—I suspect we just didn’t notice it until now.
>
> Now I just have to figure out if I can replace the ‘+’ with the encoded value of ‘%2b’, or if there’s a better way to handle this. | 1.0 | Problem collecting ping results - From an email I sent o James/Sam:
> OK, I see what’s going on. It has to do with time zones, specifically time zones that are east of GMT.
>
> When the API call to get the ping results is sent, the start and end dates are specified. Often we don’t care about the end date, and send ‘null’ for that, but we typically do care about the start date, which in this case looks something like this: 2023-01-25T00:00:00-07:00 The part at the end is the time zone offset
>
> -07:00 in this example is for Mountain standard time. For Malaysia, it is +08:00.
>
> The problem is the ‘+’ sign which is a special character in a URL. When passing that in a URL, + is converted into ‘ ‘, so in Malaysia, the API sees a start date of this: 2023-01-25T00:00:00 08:00
>
> That is an invalid time/date string, so it gets ignored and ‘null’ is used. When most of the other API calls get null, they just get data from the earliest time in the DB (i.e. no starting date is used in the DB query). But the network status call works slightly differently; if no start/end date is specified, it doesn’t gather the ping data.
>
> We didn’t see this in our testing because the testbed is always set to Mountain time. I’m sure this bug affects installations like in Romania as well—I suspect we just didn’t notice it until now.
>
> Now I just have to figure out if I can replace the ‘+’ with the encoded value of ‘%2b’, or if there’s a better way to handle this. | priority | problem collecting ping results from an email i sent o james sam ok i see what’s going on it has to do with time zones specifically time zones that are east of gmt when the api call to get the ping results is sent the start and end dates are specified often we don’t care about the end date and send ‘null’ for that but we typically do care about the start date which in this case looks something like this the part at the end is the time zone offset in this example is for mountain standard time for malaysia it is the problem is the ‘ ’ sign which is a special character in a url when passing that in a url is converted into ‘ ‘ so in malaysia the api sees a start date of this that is an invalid time date string so it gets ignored and ‘null’ is used when most of the other api calls get null they just get data from the earliest time in the db i e no starting date is used in the db query but the network status call works slightly differently if no start end date is specified it doesn’t gather the ping data we didn’t see this in our testing because the testbed is always set to mountain time i’m sure this bug affects installations like in romania as well—i suspect we just didn’t notice it until now now i just have to figure out if i can replace the ‘ ’ with the encoded value of ‘ ’ or if there’s a better way to handle this | 1 |
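The encoding issue described in the record above is general: in a URL query string a literal `+` is decoded as a space, so a `+08:00` timezone offset must be percent-encoded as `%2B` before being put in the URL. A quick illustration with Python's standard library (the date value is just an example):

```python
from urllib.parse import quote, unquote_plus

start = "2023-01-25T00:00:00+08:00"

# Server-side form decoding treats '+' as a space -> invalid timestamp.
naive = unquote_plus(start)

# Percent-encoding the value first round-trips it intact:
# '+' becomes '%2B' and ':' becomes '%3A'.
encoded = quote(start, safe="")
roundtrip = unquote_plus(encoded)
```

Here `naive` comes back as `2023-01-25T00:00:00 08:00`, the broken form the record describes, while `roundtrip` equals the original string.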
527,448 | 15,342,918,605 | IssuesEvent | 2021-02-27 18:08:01 | micronaut-projects/micronaut-core | https://api.github.com/repos/micronaut-projects/micronaut-core | closed | Compilation errors and app failed to start when using Micronaut + R2DBC | priority: high status: pr submitted | ### Task List
- [X] Steps to reproduce provided
- [X] Stacktrace (if present) provided
- [X] Example that reproduces the problem uploaded to Github
- [ ] Full description of the issue provided (see below)
### Steps to Reproduce
1. Create a new project.
2. Annotate repository interface.
3. Perform a Gradle run: `./gradlew run`
### Expected Behaviour
The application should be started without problems.
### Actual Behaviour
Application startup failed due to a strange error :
```
Task :compileGroovy FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':compileGroovy'.
> BUG! exception in phase 'semantic analysis' in source unit '/home/jk/code/workspace/bidbattle/api/bid/src/main/groovy/com/example/data/repo/UserRepository.groovy' null
```
### Environment Information
- **Operating System**: `Ubuntu 20.04.1 LTS`
- **Micronaut Version:** `micronautVersion=2.3.2-SNAPSHOT`
- **JDK Version:**
```
openjdk version "11.0.9.1" 2020-11-04
OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)
OpenJDK 64-Bit Server VM (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04, mixed mode, sharing)
```
### Example Application
[Example App on Github](https://github.com/jamil-kafi/Bid)
| 1.0 | Compilation errors and app failed to start when using Micronaut + R2DBC - ### Task List
- [X] Steps to reproduce provided
- [X] Stacktrace (if present) provided
- [X] Example that reproduces the problem uploaded to Github
- [ ] Full description of the issue provided (see below)
### Steps to Reproduce
1. Create a new project.
2. Annotate repository interface.
3. Perform a Gradle run: `./gradlew run`
### Expected Behaviour
The application should be started without problems.
### Actual Behaviour
Application startup failed due to a strange error :
```
Task :compileGroovy FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':compileGroovy'.
> BUG! exception in phase 'semantic analysis' in source unit '/home/jk/code/workspace/bidbattle/api/bid/src/main/groovy/com/example/data/repo/UserRepository.groovy' null
```
### Environment Information
- **Operating System**: `Ubuntu 20.04.1 LTS`
- **Micronaut Version:** `micronautVersion=2.3.2-SNAPSHOT`
- **JDK Version:**
```
openjdk version "11.0.9.1" 2020-11-04
OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)
OpenJDK 64-Bit Server VM (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04, mixed mode, sharing)
```
### Example Application
[Example App on Github](https://github.com/jamil-kafi/Bid)
| priority | compilation errors and app failed to start when using micronaut task list steps to reproduce provided stacktrace if present provided example that reproduces the problem uploaded to github full description of the issue provided see below steps to reproduce create a new project annotate repository interface perform a gradle run gradlew run expected behaviour the application should be started without problems actual behaviour application startup failed due to a strange error task compilegroovy failed failure build failed with an exception what went wrong execution failed for task compilegroovy bug exception in phase semantic analysis in source unit home jk code workspace bidbattle api bid src main groovy com example data repo userrepository groovy null environment information operating system ubuntu lts micronaut version micronautversion snapshot jdk version openjdk version openjdk runtime environment build ubuntu openjdk bit server vm build ubuntu mixed mode sharing example application | 1 |
652,170 | 21,524,268,099 | IssuesEvent | 2022-04-28 16:48:00 | interaction-lab/MoveToCode | https://api.github.com/repos/interaction-lab/MoveToCode | closed | Combing #99 and #100 | enhancement high priority currently_working_on | - [x] delete button for non-tracked things
- [x] build phases
- [x] errors on code shown to user
- [x] bk overboard
- [x] code issues
- [x] full code mode switch
- [x] better color scheme/less colors
- [x] make other pieces/mazes
- [x] renaming hall -> wall; turn to turn_nw
- [x] better pictures for kuri, goal, hall turn
- [x] rest of the turns
- [x] check if at goal bug rotation
- [x] log code nicely
- [x] number of things to track issue
- [x] code blocks next to spot
- [x] basic
- [x] using bounds control
- [x] rotating each individual top level block
- [x] no longer name it code start
- [x] Update curiosity scoring
- [x] making exercises
- [x] making freeplay maze mode
- [x] exercise contextual info for kuri (scaffolding etc)
- [x] what to do with kuri
- [x] move to object
- [x] point at object and interest
- [x] fix kuri y issue on plane in real life
- [x] create AI rule based that works
- [x] [Recording screen](https://docs.unity3d.com/ScriptReference/Apple.ReplayKit.ReplayKit.html) -> possibly [this repo](https://github.com/fuziki/VideoCreator) or [cross platform replay kit](https://assetstore.unity.com/packages/tools/integration/cross-platform-replay-kit-easy-screen-recording-on-ios-android-133662) -> unlikely to do (final decision is to have the students record hopefully) | 1.0 | Combing #99 and #100 - - [x] delete button for non-tracked things
- [x] build phases
- [x] errors on code shown to user
- [x] bk overboard
- [x] code issues
- [x] full code mode switch
- [x] better color scheme/less colors
- [x] make other pieces/mazes
- [x] renaming hall -> wall; turn to turn_nw
- [x] better pictures for kuri, goal, hall turn
- [x] rest of the turns
- [x] check if at goal bug rotation
- [x] log code nicely
- [x] number of things to track issue
- [x] code blocks next to spot
- [x] basic
- [x] using bounds control
- [x] rotating each individual top level block
- [x] no longer name it code start
- [x] Update curiosity scoring
- [x] making exercises
- [x] making freeplay maze mode
- [x] exercise contextual info for kuri (scaffolding etc)
- [x] what to do with kuri
- [x] move to object
- [x] point at object and interest
- [x] fix kuri y issue on plane in real life
- [x] create AI rule based that works
- [x] [Recording screen](https://docs.unity3d.com/ScriptReference/Apple.ReplayKit.ReplayKit.html) -> possibly [this repo](https://github.com/fuziki/VideoCreator) or [cross platform replay kit](https://assetstore.unity.com/packages/tools/integration/cross-platform-replay-kit-easy-screen-recording-on-ios-android-133662) -> unlikely to do (final decision is to have the students record hopefully) | priority | combing and delete button for non tracked things build phases errors on code shown to user bk overboard code issues full code mode switch better color scheme less colors make other pieces mazes renaming hall wall turn to turn nw better pictures for kuri goal hall turn rest of the turns check if at goal bug rotation log code nicely number of things to track issue code blocks next to spot basic using bounds control rotating each individual top level block no longer name it code start update curiosity scoring making exercises making freeplay maze mode exercise contextual info for kuri scaffolding etc what to do with kuri move to object point at object and interest fix kuri y issue on plane in real life create ai rule based that works possibly or unlikely to do final decision is to have the students record hopefully | 1 |
486,818 | 14,015,122,507 | IssuesEvent | 2020-10-29 12:54:58 | wazuh/wazuh-kibana-app | https://api.github.com/repos/wazuh/wazuh-kibana-app | closed | Multiple errors in the security section | bug priority/high | | Wazuh | Elastic | Rev |
| ----- | ------- | --- |
| 4.x | 7.9.x | 4004 |
I am testing in the security section, these are the errors I found.
- [ ] Frequently, the 'police' request returns an error 500
Request:
```JSON
{"method":"GET","path":"/security/policies","body":{},"id":"default"}
```
Response:
```JSON
{"message":"3013 - Wazuh Internal Error","code":3013,"statusCode":500}
```
- [ ] Navigate quickly between the tabs launches the health-check

- [ ] Sometimes the actions selector is empty

| 1.0 | Multiple errors in the security section - | Wazuh | Elastic | Rev |
| ----- | ------- | --- |
| 4.x | 7.9.x | 4004 |
I am testing in the security section, these are the errors I found.
- [ ] Frequently, the 'police' request returns an error 500
Request:
```JSON
{"method":"GET","path":"/security/policies","body":{},"id":"default"}
```
Response:
```JSON
{"message":"3013 - Wazuh Internal Error","code":3013,"statusCode":500}
```
- [ ] Navigate quickly between the tabs launches the health-check

- [ ] Sometimes the actions selector is empty

| priority | multiple errors in the security section wazuh elastic rev x x i am testing in the security section these are the errors i found frequently the police request returns an error request json method get path security policies body id default response json message wazuh internal error code statuscode navigate quickly between the tabs launches the health check sometimes the actions selector is empty | 1 |
666,513 | 22,358,183,420 | IssuesEvent | 2022-06-15 17:38:12 | 389ds/389-ds-base | https://api.github.com/repos/389ds/389-ds-base | closed | RFE: `dsidm` - add shortcuts for `initalise` and `organizationalunit` | RFE CLI Need BZ priority_high | It's tiresome to type these subcommands, especially when they use different spelling next to each other. I regularly type `initialize` or `organisationalunit`. It would be nice to have `init` and `ou` aliases in addition to alternative spelling variants. | 1.0 | RFE: `dsidm` - add shortcuts for `initalise` and `organizationalunit` - It's tiresome to type these subcommands, especially when they use different spelling next to each other. I regularly type `initialize` or `organisationalunit`. It would be nice to have `init` and `ou` aliases in addition to alternative spelling variants. | priority | rfe dsidm add shortcuts for initalise and organizationalunit it s tiresome to type these subcommands especially when they use different spelling next to each other i regularly type initialize or organisationalunit it would be nice to have init and ou aliases in addition to alternative spelling variants | 1 |
812,695 | 30,348,508,395 | IssuesEvent | 2023-07-11 17:06:26 | GlareDB/glaredb | https://api.github.com/repos/GlareDB/glaredb | closed | Lazy/eager evaluation api in python bindings | feat :sparkler: priority-high :mountain: python :snake: sprint :runner: | ## execute
`sql` => lazy evaluation,
`execute` => eager evaluation | 1.0 | Lazy/eager evaluation api in python bindings - ## execute
`sql` => lazy evaluation,
`execute` => eager evaluation | priority | lazy eager evaluation api in python bindings execute sql lazy evaluation execute eager evaluation | 1 |
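The record above distinguishes a lazy entry point (`sql`: build a plan, run it later) from an eager one (`execute`: run immediately). The split can be sketched generically in Python — none of these names come from the actual bindings:

```python
class LazyResult:
    """Holds a deferred computation; nothing runs until collect()."""

    def __init__(self, thunk):
        self._thunk = thunk

    def collect(self):
        return self._thunk()


class Session:
    def __init__(self):
        self.runs = 0            # counts actual query executions

    def _run(self, query):
        self.runs += 1
        return f"rows for {query}"

    def sql(self, query):
        # Lazy: defer execution until the caller collects.
        return LazyResult(lambda: self._run(query))

    def execute(self, query):
        # Eager: run right away and return the result.
        return self._run(query)


s = Session()
lazy = s.sql("select 1")         # no work done yet
eager = s.execute("select 1")    # runs immediately
```

The lazy path lets an engine inspect or optimize the whole plan before anything executes; the eager path trades that for immediacy.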
404,370 | 11,856,128,591 | IssuesEvent | 2020-03-25 06:39:27 | TheCodeXTeam/Kandahar-IHS | https://api.github.com/repos/TheCodeXTeam/Kandahar-IHS | closed | Attribute and schema problem | Effort: 3 For: Database Priority: High Type: Bug | **Describe the bug**
There are two problems:
1. Attributes name problem in experience table , the attributes are OrganizationEmail and OrganizationCell but in check we called them Email and CellPhone
2.Schema problem in experience and educational background
| 1.0 | Attribute and schema problem - **Describe the bug**
There are two problems:
1. Attributes name problem in experience table , the attributes are OrganizationEmail and OrganizationCell but in check we called them Email and CellPhone
2.Schema problem in experience and educational background
| priority | attribute and schema problem describe the bug there are two problems attributes name problem in experience table the attributes are organizationemail and organizationcell but in check we called them email and cellphone schema problem in experience and educational background | 1 |
339,088 | 10,241,807,444 | IssuesEvent | 2019-08-20 02:05:45 | iotexproject/iotex-antenna | https://api.github.com/repos/iotexproject/iotex-antenna | closed | add error handling | high priority | revisit all public apis and see if arguments are checked and how errors are handled | 1.0 | add error handling - revisit all public apis and see if arguments are checked and how errors are handled | priority | add error handling revisit all public apis and see if arguments are checked and how errors are handled | 1 |
440,116 | 12,693,538,126 | IssuesEvent | 2020-06-22 03:42:04 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | Using an inventor stolen equipment with a gamepad can cause a crash | High Priority Random Bug Lottery reported in v1.8 | I'm 99% sure this is another occurrence of the missing e.equippedby field that caused a similar crash with v1.8, which I fixed for v1.8.2. Unfortunately I've moved too far on from the v1.8 codebase now to do a v1.8.3, but this should be an easy fix for v1.9.
```
Version: v1.8.2 / 0.13.0 (mod API)
- Sess. ID: dicey_dungeons_2020-06-07_22'38'12 (started: 2020-06-07 22:38:12)
- Stacktrace:
crashed: 2020-06-07 23:42:53
duration: 01:04:40
error: ERROR in callscenemethod(Combat,update) static : Null Object Reference, stack =
Called from elements.Equipment.willbecomeready (elements/Equipment.hx line 3418)
Called from states.Combat.gamepadactivateselectedcombo (states/Combat.hx line 1684)
Called from states.Combat.updategamepad (states/Combat.hx line 1969)
Called from states.Combat.update (states/Combat.hx line 552)
``` | 1.0 | Using an inventor stolen equipment with a gamepad can cause a crash - I'm 99% sure this is another occurrence of the missing e.equippedby field that caused a similar crash with v1.8, which I fixed for v1.8.2. Unfortunately I've moved too far on from the v1.8 codebase now to do a v1.8.3, but this should be an easy fix for v1.9.
```
Version: v1.8.2 / 0.13.0 (mod API)
- Sess. ID: dicey_dungeons_2020-06-07_22'38'12 (started: 2020-06-07 22:38:12)
- Stacktrace:
crashed: 2020-06-07 23:42:53
duration: 01:04:40
error: ERROR in callscenemethod(Combat,update) static : Null Object Reference, stack =
Called from elements.Equipment.willbecomeready (elements/Equipment.hx line 3418)
Called from states.Combat.gamepadactivateselectedcombo (states/Combat.hx line 1684)
Called from states.Combat.updategamepad (states/Combat.hx line 1969)
Called from states.Combat.update (states/Combat.hx line 552)
``` | priority | using an inventor stolen equipment with a gamepad can cause a crash i m sure this is another occurrence of the missing e equippedby field that caused a similar crash with which i fixed for unfortunately i ve moved too far on from the codebase now to do a but this should be an easy fix for version mod api sess id dicey dungeons started stacktrace crashed duration error error in callscenemethod combat update static null object reference stack called from elements equipment willbecomeready elements equipment hx line called from states combat gamepadactivateselectedcombo states combat hx line called from states combat updategamepad states combat hx line called from states combat update states combat hx line | 1 |
418,716 | 12,202,652,672 | IssuesEvent | 2020-04-30 09:17:48 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | When non-admin user made a reply and create, favorite or subscribe to a discussion under a hidden or private group, it does not show on the user's profile page. | bug priority: high | **Describe the bug**
When non-admin user made a reply and create, favorite or subscribe to a discussion under a hidden or private group, it does not show on the user's profile page.
**To Reproduce**
Steps to reproduce the behavior:
1. Login using a non-admin account
2. Go to any hidden or private groups that you are a member
3. Under the discussions tab, reply, create, favorite or subscribe to a discussion
4. Now go to the profile page and navigate to forums tab
5. All the changes you have made do not appear to any of My Discussions, My Replies, My Favorites, and Subscriptions tab.
**Expected behavior**
All activities of non-admin with group forums or discussions should appear to his profile page within My Discussions, My Replies, My Favorites, and Subscriptions tab.
**Screencast**
https://drive.google.com/file/d/1KPZV14UVVthiwPsli1JW010a6DgeCrxs/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1111963912/4613/
| 1.0 | When non-admin user made a reply and create, favorite or subscribe to a discussion under a hidden or private group, it does not show on the user's profile page. - **Describe the bug**
When non-admin user made a reply and create, favorite or subscribe to a discussion under a hidden or private group, it does not show on the user's profile page.
**To Reproduce**
Steps to reproduce the behavior:
1. Login using a non-admin account
2. Go to any hidden or private groups that you are a member
3. Under the discussions tab, reply, create, favorite or subscribe to a discussion
4. Now go to the profile page and navigate to forums tab
5. All the changes you have made do not appear to any of My Discussions, My Replies, My Favorites, and Subscriptions tab.
**Expected behavior**
All activities of non-admin with group forums or discussions should appear to his profile page within My Discussions, My Replies, My Favorites, and Subscriptions tab.
**Screencast**
https://drive.google.com/file/d/1KPZV14UVVthiwPsli1JW010a6DgeCrxs/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1111963912/4613/
| priority | when non admin user made a reply and create favorite or subscribe to a discussion under a hidden or private group it does not show on the user s profile page describe the bug when non admin user made a reply and create favorite or subscribe to a discussion under a hidden or private group it does not show on the user s profile page to reproduce steps to reproduce the behavior login using a non admin account go to any hidden or private groups that you are a member under the discussions tab reply create favorite or subscribe to a discussion now go to the profile page and navigate to forums tab all the changes you have made do not appear to any of my discussions my replies my favorites and subscriptions tab expected behavior all activities of non admin with group forums or discussions should appear to his profile page within my discussions my replies my favorites and subscriptions tab screencast support ticket links | 1 |
530,201 | 15,418,159,236 | IssuesEvent | 2021-03-05 08:22:48 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Exception thrown when the parser traverses through an indexed expression | Area/Parser CompilerSLDump Priority/High Team/CompilerFE Type/Bug | **Description:**
The exception given below is thrown when the following code snippet is traversed through the TreeModifier of the parser. This seems to happen only when the statement resides inside a block node.
```ballerina
function foo() {
{
foo:bar[a] -> y;
}
}
```
```sh
java.lang.ClassCastException: class io.ballerina.compiler.syntax.tree.SimpleNameReferenceNode cannot be cast to class io.ballerina.compiler.syntax.tree.IdentifierToken (io.ballerina.compiler.syntax.tree.SimpleNameReferenceNode and io.ballerina.compiler.syntax.tree.IdentifierToken are in unnamed module of loader 'app')
```
**Steps to reproduce:**
**Affected Versions:**
Swan Lake Preview 4
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 1.0 | Exception thrown when the parser traverses through an indexed expression - **Description:**
The exception given below is thrown when the following code snippet is traversed through the TreeModifier of the parser. This seems to happen only when the statement resides inside a block node.
```ballerina
function foo() {
{
foo:bar[a] -> y;
}
}
```
```sh
java.lang.ClassCastException: class io.ballerina.compiler.syntax.tree.SimpleNameReferenceNode cannot be cast to class io.ballerina.compiler.syntax.tree.IdentifierToken (io.ballerina.compiler.syntax.tree.SimpleNameReferenceNode and io.ballerina.compiler.syntax.tree.IdentifierToken are in unnamed module of loader 'app')
```
**Steps to reproduce:**
**Affected Versions:**
Swan Lake Preview 4
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| priority | exception thrown when the parser traverses through an indexed expression description the exception given below is thrown when the following code snippet is traversed through the treemodifier of the parser this seems to happen only when the statement resides inside a block node ballerina function foo foo bar y sh java lang classcastexception class io ballerina compiler syntax tree simplenamereferencenode cannot be cast to class io ballerina compiler syntax tree identifiertoken io ballerina compiler syntax tree simplenamereferencenode and io ballerina compiler syntax tree identifiertoken are in unnamed module of loader app steps to reproduce affected versions swan lake preview os db other environment details and versions related issues optional suggested labels optional suggested assignees optional | 1 |
524,729 | 15,222,557,934 | IssuesEvent | 2021-02-18 00:33:36 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | The download flashing of a sensor can enter an invalid state | Bug: ???? Bug: development Docs: not needed Effort: small Priority: high | ## Describe the bug
when a sensor is under a delayed logging condition, it's 'currently downloading' flashing can start
### To reproduce
1. Go to add a sensor and give it a log delay
2. Wait for it to start downloading
4. See error
### Expected behaviour
Should never flash when under a logging delay
### Proposed Solution
TBD
### Version and device info
- App version: 7.0.0 rc3
- Tablet model:
- OS version:
### Additional context
N/A
| 1.0 | The download flashing of a sensor can enter an invalid state - ## Describe the bug
when a sensor is under a delayed logging condition, it's 'currently downloading' flashing can start
### To reproduce
1. Go to add a sensor and give it a log delay
2. Wait for it to start downloading
4. See error
### Expected behaviour
Should never flash when under a logging delay
### Proposed Solution
TBD
### Version and device info
- App version: 7.0.0 rc3
- Tablet model:
- OS version:
### Additional context
N/A
| priority | the download flashing of a sensor can enter an invalid state describe the bug when a sensor is under a delayed logging condition it s currently downloading flashing can start to reproduce go to add a sensor and give it a log delay wait for it to start downloading see error expected behaviour should never flash when under a logging delay proposed solution tbd version and device info app version tablet model os version additional context n a | 1 |
497,078 | 14,361,689,034 | IssuesEvent | 2020-11-30 18:38:45 | ChainSafe/forest | https://api.github.com/repos/ChainSafe/forest | opened | Parallel pubsub event handling in ChainSyncer | Priority: 2 - High | Acceptance Criteria
- Pubsub event handling done in parallel in the ChainSyncer
Other Info
- Performance needs to be updated. ChainSyncer is polling network event channel. It's slow right now processing (i.e. receiving events) because it's using the same thread for bitswap requests. | 1.0 | Parallel pubsub event handling in ChainSyncer - Acceptance Criteria
- Pubsub event handling done in parallel in the ChainSyncer
Other Info
- Performance needs to be updated. ChainSyncer is polling network event channel. It's slow right now processing (i.e. receiving events) because it's using the same thread for bitswap requests. | priority | parallel pubsub event handling in chainsyncer acceptance criteria pubsub event handling done in parallel in the chainsyncer other info performance needs to be updated chainsyncer is polling network event channel it s slow right now processing i e receiving events because it s using the same thread for bitswap requests | 1 |
63,023 | 3,193,866,442 | IssuesEvent | 2015-09-30 08:44:28 | fusioninventory/fusioninventory-for-glpi | https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi | closed | Clean duplicated components during upgrade process | Category: Computer inventory Component: For junior contributor Component: Found in version Priority: High Status: Closed Tracker: Bug | ---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 1467, http://forge.fusioninventory.org/issues/1467
Original Date: 2012-02-14
Original Assignee: David Durieux
---
With old version, components is x2 or more. We must modify it to works nicely after.
| 1.0 | Clean duplicated components during upgrade process - ---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 1467, http://forge.fusioninventory.org/issues/1467
Original Date: 2012-02-14
Original Assignee: David Durieux
---
With old version, components is x2 or more. We must modify it to works nicely after.
| priority | clean duplicated components during upgrade process author name david durieux ddurieux original redmine issue original date original assignee david durieux with old version components is or more we must modify it to works nicely after | 1 |
707,360 | 24,303,187,148 | IssuesEvent | 2022-09-29 15:16:18 | choderalab/covid-moonshot-ml | https://api.github.com/repos/choderalab/covid-moonshot-ml | closed | Rebase all branches | priority: high | With the forced changes in the repo's history, we need to check and rebase all the branches on top of `main`, such that the history makes sense when we try to merge them into `main`. Following instructions from https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository#fully-removing-the-data-from-github
# ToDos
- [x] Delete all the branches that are not going to be even considered (@apayne97 @kaminow )
- [ ] Rebase with the help of magic tools (PyCharm, bash, kompare, etc.) (@kaminow @ijpulidos )
# Branches list
Please check the ones that have been already _rebased_ or don't need _rebasing_. Or remove the ones that don't apply (check first item in ToDos list)
- [x] remotes/origin/HEAD -> origin/main
- [x] remotes/origin/add-data-downloading
- [x] remotes/origin/add-loss-functions
- [x] remotes/origin/add-uncertainties
- [x] remotes/origin/dataset-allow-duplicates
- [x] remotes/origin/fix-docking
- [x] remotes/origin/fix-qm9-discrete-energies
- [ ] remotes/origin/graph-nets
- [x] remotes/origin/hallucinate-fragalysis
- [x] remotes/origin/improve-documentation
- [x] remotes/origin/main
- [x] remotes/origin/major-restructure
- [x] remotes/origin/mers-docking
- [x] remotes/origin/update-kinoml
| 1.0 | Rebase all branches - With the forced changes in the repo's history, we need to check and rebase all the branches on top of `main`, such that the history makes sense when we try to merge them into `main`. Following instructions from https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository#fully-removing-the-data-from-github
# ToDos
- [x] Delete all the branches that are not going to be even considered (@apayne97 @kaminow )
- [ ] Rebase with the help of magic tools (PyCharm, bash, kompare, etc.) (@kaminow @ijpulidos )
# Branches list
Please check the ones that have been already _rebased_ or don't need _rebasing_. Or remove the ones that don't apply (check first item in ToDos list)
- [x] remotes/origin/HEAD -> origin/main
- [x] remotes/origin/add-data-downloading
- [x] remotes/origin/add-loss-functions
- [x] remotes/origin/add-uncertainties
- [x] remotes/origin/dataset-allow-duplicates
- [x] remotes/origin/fix-docking
- [x] remotes/origin/fix-qm9-discrete-energies
- [ ] remotes/origin/graph-nets
- [x] remotes/origin/hallucinate-fragalysis
- [x] remotes/origin/improve-documentation
- [x] remotes/origin/main
- [x] remotes/origin/major-restructure
- [x] remotes/origin/mers-docking
- [x] remotes/origin/update-kinoml
| priority | rebase all branches with the forced changes in the repo s history we need to check and rebase all the branches on top of main such that the history makes sense when we try to merge them into main following instructions from todos delete all the branches that are not going to be even considered kaminow rebase with the help of magic tools pycharm bash kompare etc kaminow ijpulidos branches list please check the ones that have been already rebased or don t need rebasing or remove the ones that don t apply check first item in todos list remotes origin head origin main remotes origin add data downloading remotes origin add loss functions remotes origin add uncertainties remotes origin dataset allow duplicates remotes origin fix docking remotes origin fix discrete energies remotes origin graph nets remotes origin hallucinate fragalysis remotes origin improve documentation remotes origin main remotes origin major restructure remotes origin mers docking remotes origin update kinoml | 1 |
523,262 | 15,176,663,515 | IssuesEvent | 2021-02-14 06:48:49 | dfelton/kobens-gemini | https://api.github.com/repos/dfelton/kobens-gemini | closed | Connection closed abnormally while awaiting message | Priority - High bug | ```
Shutdown Enabled at: 2021-01-15 02:49:03
Exception: Amp\Websocket\ClosedException
Code: 1006
Message: Connection closed abnormally while awaiting message; Code 1006 (ABNORMAL_CLOSE); Reason: "TCP connection closed unexpectedly"
Strace:
#0 [internal function]: Amp\Websocket\Rfc6455Client->Amp\Websocket\{closure}()
#1 vendor/amphp/amp/lib/Coroutine.php(60): Generator->current()
#2 vendor/amphp/amp/lib/functions.php(66): Amp\Coroutine->__construct(Object(Generator))
#3 vendor/amphp/websocket/src/Rfc6455Client.php(702): Amp\call(Object(Closure))
#4 vendor/amphp/websocket/src/Rfc6455Client.php(299): Amp\Websocket\Rfc6455Client->close(1006, 'TCP connection ...')
#5 [internal function]: Amp\Websocket\Rfc6455Client->read()
#6 vendor/amphp/amp/lib/Coroutine.php(105): Generator->send(NULL)
#7 vendor/amphp/amp/lib/Internal/Placeholder.php(130): Amp\Coroutine->Amp\{closure}(NULL, NULL)
#8 vendor/amphp/amp/lib/Deferred.php(45): class@anonymous->resolve(NULL)
#9 vendor/amphp/byte-stream/lib/ResourceInputStream.php(101): Amp\Deferred->resolve(NULL)
#10 vendor/amphp/amp/lib/Loop/NativeDriver.php(192): Amp\ByteStream\ResourceInputStream::Amp\ByteStream\{closure}('n', Resource id #446, NULL)
#11 vendor/amphp/amp/lib/Loop/NativeDriver.php(97): Amp\Loop\NativeDriver->selectStreams(Array, Array, 1)
#12 vendor/amphp/amp/lib/Loop/Driver.php(134): Amp\Loop\NativeDriver->dispatch(true)
#13 vendor/amphp/amp/lib/Loop/Driver.php(72): Amp\Loop\Driver->tick()
#14 vendor/amphp/amp/lib/Loop.php(84): Amp\Loop\Driver->run()
#15 src/Command/Command/TradeRepeater/FillMonitor/WebSocket.php(80): Amp\Loop::run(Object(Closure))
#16 vendor/symfony/console/Command/Command.php(255): Kobens\Gemini\Command\Command\TradeRepeater\FillMonitor\WebSocket->execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#17 vendor/symfony/console/Application.php(1009): Symfony\Component\Console\Command\Command->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#18 vendor/symfony/console/Application.php(273): Symfony\Component\Console\Application->doRunCommand(Object(Kobens\Gemini\Command\Command\TradeRepeater\FillMonitor\WebSocket), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#19 vendor/symfony/console/Application.php(149): Symfony\Component\Console\Application->doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#20 bin/gemini(346): Symfony\Component\Console\Application->run()
#21 {main}
``` | 1.0 | Connection closed abnormally while awaiting message - ```
Shutdown Enabled at: 2021-01-15 02:49:03
Exception: Amp\Websocket\ClosedException
Code: 1006
Message: Connection closed abnormally while awaiting message; Code 1006 (ABNORMAL_CLOSE); Reason: "TCP connection closed unexpectedly"
Strace:
#0 [internal function]: Amp\Websocket\Rfc6455Client->Amp\Websocket\{closure}()
#1 vendor/amphp/amp/lib/Coroutine.php(60): Generator->current()
#2 vendor/amphp/amp/lib/functions.php(66): Amp\Coroutine->__construct(Object(Generator))
#3 vendor/amphp/websocket/src/Rfc6455Client.php(702): Amp\call(Object(Closure))
#4 vendor/amphp/websocket/src/Rfc6455Client.php(299): Amp\Websocket\Rfc6455Client->close(1006, 'TCP connection ...')
#5 [internal function]: Amp\Websocket\Rfc6455Client->read()
#6 vendor/amphp/amp/lib/Coroutine.php(105): Generator->send(NULL)
#7 vendor/amphp/amp/lib/Internal/Placeholder.php(130): Amp\Coroutine->Amp\{closure}(NULL, NULL)
#8 vendor/amphp/amp/lib/Deferred.php(45): class@anonymous->resolve(NULL)
#9 vendor/amphp/byte-stream/lib/ResourceInputStream.php(101): Amp\Deferred->resolve(NULL)
#10 vendor/amphp/amp/lib/Loop/NativeDriver.php(192): Amp\ByteStream\ResourceInputStream::Amp\ByteStream\{closure}('n', Resource id #446, NULL)
#11 vendor/amphp/amp/lib/Loop/NativeDriver.php(97): Amp\Loop\NativeDriver->selectStreams(Array, Array, 1)
#12 vendor/amphp/amp/lib/Loop/Driver.php(134): Amp\Loop\NativeDriver->dispatch(true)
#13 vendor/amphp/amp/lib/Loop/Driver.php(72): Amp\Loop\Driver->tick()
#14 vendor/amphp/amp/lib/Loop.php(84): Amp\Loop\Driver->run()
#15 src/Command/Command/TradeRepeater/FillMonitor/WebSocket.php(80): Amp\Loop::run(Object(Closure))
#16 vendor/symfony/console/Command/Command.php(255): Kobens\Gemini\Command\Command\TradeRepeater\FillMonitor\WebSocket->execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#17 vendor/symfony/console/Application.php(1009): Symfony\Component\Console\Command\Command->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#18 vendor/symfony/console/Application.php(273): Symfony\Component\Console\Application->doRunCommand(Object(Kobens\Gemini\Command\Command\TradeRepeater\FillMonitor\WebSocket), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#19 vendor/symfony/console/Application.php(149): Symfony\Component\Console\Application->doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#20 bin/gemini(346): Symfony\Component\Console\Application->run()
#21 {main}
| priority | connection closed abnormally while awaiting message shutdown enabled at exception amp websocket closedexception code message connection closed abnormally while awaiting message code abnormal close reason tcp connection closed unexpectedly strace amp websocket amp websocket closure vendor amphp amp lib coroutine php generator current vendor amphp amp lib functions php amp coroutine construct object generator vendor amphp websocket src php amp call object closure vendor amphp websocket src php amp websocket close tcp connection amp websocket read vendor amphp amp lib coroutine php generator send null vendor amphp amp lib internal placeholder php amp coroutine amp closure null null vendor amphp amp lib deferred php class anonymous resolve null vendor amphp byte stream lib resourceinputstream php amp deferred resolve null vendor amphp amp lib loop nativedriver php amp bytestream resourceinputstream amp bytestream closure n resource id null vendor amphp amp lib loop nativedriver php amp loop nativedriver selectstreams array array vendor amphp amp lib loop driver php amp loop nativedriver dispatch true vendor amphp amp lib loop driver php amp loop driver tick vendor amphp amp lib loop php amp loop driver run src command command traderepeater fillmonitor websocket php amp loop run object closure vendor symfony console command command php kobens gemini command command traderepeater fillmonitor websocket execute object symfony component console input argvinput object symfony component console output consoleoutput vendor symfony console application php symfony component console command command run object symfony component console input argvinput object symfony component console output consoleoutput vendor symfony console application php symfony component console application doruncommand object kobens gemini command command traderepeater fillmonitor websocket object symfony component console input argvinput object symfony component console output consoleoutput vendor symfony console application php symfony component console application dorun object symfony component console input argvinput object symfony component console output consoleoutput bin gemini symfony component console application run main | 1 |
126,127 | 4,972,767,207 | IssuesEvent | 2016-12-05 22:32:31 | pelias/pelias | https://api.github.com/repos/pelias/pelias | closed | 500 Error returned for invalid request | bug High Priority | There was recently a 500 error in the production service caused by the following type of request:
`/v1/reverse?layers=address&layers=address&point.lat=undefined&point.lon=undefined%20&size=1&size=1&sources=osm&sources=osm`
We should fix that so the error coming back is in the 400 range to indicate an invalid request instead of internal server error.
Hooray for graphs! :) | 1.0 | 500 Error returned for invalid request - There was recently a 500 error in the production service caused by the following type of request:
`/v1/reverse?layers=address&layers=address&point.lat=undefined&point.lon=undefined%20&size=1&size=1&sources=osm&sources=osm`
We should fix that so the error coming back is in the 400 range to indicate an invalid request instead of internal server error.
Hooray for graphs! :) | priority | error returned for invalid request there was recently a error in the production service caused by the following type of request reverse layers address layers address point lat undefined point lon undefined size size sources osm sources osm we should fix that so the error coming back is in the range to indicate an invalid request instead of internal server error hooray for graphs | 1 |
126,974 | 5,008,513,690 | IssuesEvent | 2016-12-12 19:43:26 | brycethorup/cash-class-tracker | https://api.github.com/repos/brycethorup/cash-class-tracker | closed | Item lists to populate game lists | High Priority | For Take Stock, there are 12 possible items.
For Order Up there are 8.
Both of these lists will be in an RTF text file uploaded to the drop box. | 1.0 | Item lists to populate game lists - For Take Stock, there are 12 possible items.
For Order Up there are 8.
Both of these lists will be in an RTF text file uploaded to the drop box. | priority | item lists to populate game lists for take stock there are possible items for order up there are both of these lists will be in an rtf text file uploaded to the drop box | 1 |
217,750 | 7,327,905,098 | IssuesEvent | 2018-03-04 15:31:49 | projectwife/mtesitoo-android | https://api.github.com/repos/projectwife/mtesitoo-android | opened | When submitting a new product image upload fails and isn't handled correctly | Priority: High bug | Submitting a new product shows a 'failed to submit' style message but the product is usually successfully created. This is because, the product info is submitted successfully, but the image upload fails.
`com.tesitoo E/product add error: com.android.volley.TimeoutError`
However, the submit page doesn't proceed to the next state even though the product is created.
Expected behaviour:
The user is informed that the product is created successfully but the images weren't added.
Also, need to check why image upload fails so frequently and under what conditions. | 1.0 | When submitting a new product image upload fails and isn't handled correctly - Submitting a new product shows a 'failed to submit' style message but the product is usually successfully created. This is because, the product info is submitted successfully, but the image upload fails.
`com.tesitoo E/product add error: com.android.volley.TimeoutError`
However, the submit page doesn't proceed to the next state even though the product is created.
Expected behaviour:
The user is informed that the product is created successfully but the images weren't added.
Also, need to check why image upload fails so frequently and under what conditions. | priority | when submitting a new product image upload fails and isn t handled correctly submitting a new product shows a failed to submit style message but the product is usually successfully created this is because the product info is submitted successfully but the image upload fails com tesitoo e product add error com android volley timeouterror however the submit page doesn t proceed to the next state even though the product is created expected behaviour the user is informed that the product is created successfully but the images weren t added also need to check why image upload fails so frequently and under what conditions | 1 |
503,412 | 14,591,293,308 | IssuesEvent | 2020-12-19 12:16:14 | Project-Books/book-project | https://api.github.com/repos/Project-Books/book-project | opened | Unable to navigate to the settings page | bug high-priority impact: high | **Describe the bug**
Unable to navigate to the settings page on the demo app on AWS
**To Reproduce**
Steps to reproduce the behaviour:
1. Go to http://bookprojectv010-env.eba-22zuiphf.eu-west-2.elasticbeanstalk.com
2. Navigate to the settings page
3. See error
**Expected behaviour**
The settings page should open.
**Additional context**
The web app is running the version of the app from this [commit](https://github.com/Project-Books/book-project/commit/81ef53a697d64d9a57b1bc12dadfa02291a56308) | 1.0 | Unable to navigate to the settings page - **Describe the bug**
Unable to navigate to the settings page on the demo app on AWS
**To Reproduce**
Steps to reproduce the behaviour:
1. Go to http://bookprojectv010-env.eba-22zuiphf.eu-west-2.elasticbeanstalk.com
2. Navigate to the settings page
3. See error
**Expected behaviour**
The settings page should open.
**Additional context**
The web app is running the version of the app from this [commit](https://github.com/Project-Books/book-project/commit/81ef53a697d64d9a57b1bc12dadfa02291a56308) | priority | unable to navigate to the settings page describe the bug unable to navigate to the settings page on the demo app on aws to reproduce steps to reproduce the behaviour go to navigate to the settings page see error expected behaviour the settings page should open additional context the web app is running the version of the app from this | 1 |
98,823 | 4,031,619,848 | IssuesEvent | 2016-05-18 17:48:06 | 0mp/io-touchpad | https://api.github.com/repos/0mp/io-touchpad | opened | Consider changing the scope of the fifth iteration | priority: high status: help needed type: question/discussion | I think we are not able to produce a nice GUI in just a week. I'd change the scope of the 5th iteration and focus on the app's performance and bugs instead of GUI.
I strongly suggest that we focus on these things during the 5th iteration:
- Find out why our app is using up to 100% of the CPU.
- Solve the bugs related issues.
- Improve the tests module.
What do you think?
 | 1.0 | Consider changing the scope of the fifth iteration - I think we are not able to produce a nice GUI in just a week. I'd change the scope of the 5th iteration and focus on the app's performance and bugs instead of GUI.
I strongly suggest that we focus on these things during the 5th iteration:
- Find out why our app is using up to 100% of the CPU.
- Solve the bugs related issues.
- Improve the tests module.
What do you think?
 | priority | consider changing the scope of the fifth iteration i think we are not able to produce a nice gui in just a week i d change the scope of the iteration and focus on the app s performance and bugs instead of gui i strongly suggest that we focus on these things during the iteration find out why our app is using up to of the cpu solve the bugs related issues improve the tests module what do you think | 1 |
674,087 | 23,038,756,958 | IssuesEvent | 2022-07-22 22:45:24 | bcgov/foi-flow | https://api.github.com/repos/bcgov/foi-flow | closed | Additional Application Details getting cleared off after AXIS Sync - Personal Requests | bug high priority | **Describe the bug in current situation**
Not a deal breaker for 24th Go live . Issue is like Additional Applicant details like DOB , Correction Number, Employee Number is **getting cleared off**, even though data exists on AXIS side as well as on FLOW side. At the same time, some other information is **not** getting cleared. Need to bring consistent logic on which data gets cleared and what need to be overwritten.
**Link bug to the User Story**
**Impact of this bug**
Describe the impact, i.e. what the impact is, and number of users impacted.
Lose data. High.
**Chance of Occurring (high/medium/low/very low)**
Medium. Impacts all requests with this information.
**Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?**
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Go to Public facing app, submit a Personal FOI request with Application DOB, Correction Number, Employee Number, Child details
2. Go to FLOW app, do an advanced search for above submitted FOI request
3. Do AXIS with a request having Application DOB and other info on the FLOW 's unopened request
4. We can see DOB, and other information getting cleared off while Sync'ing even though data exists.
**Actual/ observed behaviour/ results**
**Expected behaviour**
A clear and concise description of what you expected to happen. Use the gherking language.
* GIVEN a row requests from the FOI request form includes DOB, Correction Number, Employee Number, child details etc.
* WHEN a request is synced from AXIS
* The information (DOB, Correction Number, Employee Number, child details etc.) should not be erased from the request in Flow (ie.,Fields that are not in AXIS/ that are not pulling from AXIS but exist in FOI request form should persist.)
**Screenshots/ Visual Reference/ Source**
If applicable, add screenshots to help explain your problem. You an use screengrab.
| 1.0 | Additional Application Details getting cleared off after AXIS Sync - Personal Requests - **Describe the bug in current situation**
Not a deal breaker for 24th Go live . Issue is like Additional Applicant details like DOB , Correction Number, Employee Number is **getting cleared off**, even though data exists on AXIS side as well as on FLOW side. At the same time, some other information is **not** getting cleared. Need to bring consistent logic on which data gets cleared and what need to be overwritten.
**Link bug to the User Story**
**Impact of this bug**
Describe the impact, i.e. what the impact is, and number of users impacted.
Lose data. High.
**Chance of Occurring (high/medium/low/very low)**
Medium. Impacts all requests with this information.
**Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?**
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Go to Public facing app, submit a Personal FOI request with Application DOB, Correction Number, Employee Number, Child details
2. Go to FLOW app, do an advanced search for above submitted FOI request
3. Do AXIS with a request having Application DOB and other info on the FLOW 's unopened request
4. We can see DOB, and other information getting cleared off while Sync'ing even though data exists.
**Actual/ observed behaviour/ results**
**Expected behaviour**
A clear and concise description of what you expected to happen. Use the gherking language.
* GIVEN a row requests from the FOI request form includes DOB, Correction Number, Employee Number, child details etc.
* WHEN a request is synced from AXIS
* The information (DOB, Correction Number, Employee Number, child details etc.) should not be erased from the request in Flow (ie.,Fields that are not in AXIS/ that are not pulling from AXIS but exist in FOI request form should persist.)
**Screenshots/ Visual Reference/ Source**
If applicable, add screenshots to help explain your problem. You an use screengrab.
| priority | additional application details getting cleared off after axis sync personal requests describe the bug in current situation not a deal breaker for go live issue is like additional applicant details like dob correction number employee number is getting cleared off even though data exists on axis side as well as on flow side at the same time some other information is not getting cleared need to bring consistent logic on which data gets cleared and what need to be overwritten link bug to the user story impact of this bug describe the impact i e what the impact is and number of users impacted lose data high chance of occurring high medium low very low medium impacts all requests with this information pre conditions which env any pre requesites or assumptions to execute steps steps to reproduce steps to reproduce the behavior go to public facing app submit a personal foi request with application dob correction number employee number child details go to flow app do an advanced search for above submitted foi request do axis with a request having application dob and other info on the flow s unopened request we can see dob and other information getting cleared off while sync ing even though data exists actual observed behaviour results expected behaviour a clear and concise description of what you expected to happen use the gherking language given a row requests from the foi request form includes dob correction number employee number child details etc when a request is synced from axis the information dob correction number employee number child details etc should not be erased from the request in flow ie fields that are not in axis that are not pulling from axis but exist in foi request form should persist screenshots visual reference source if applicable add screenshots to help explain your problem you an use screengrab | 1 |
664,735 | 22,286,677,504 | IssuesEvent | 2022-06-11 18:44:49 | mreishman/Log-Hog | https://api.github.com/repos/mreishman/Log-Hog | closed | Close of search doesn't clear text | bug Priority - 1 - Very High Confirmed-In-Master | Closing search box doesn't clear text, filter still applied | 1.0 | Close of search doesn't clear text - Closing search box doesn't clear text, filter still applied | priority | close of search doesn t clear text closing search box doesn t clear text filter still applied | 1 |
494,465 | 14,259,266,351 | IssuesEvent | 2020-11-20 07:58:32 | rtCamp/web-stories-wp | https://api.github.com/repos/rtCamp/web-stories-wp | opened | Inherit Typography | priority:high stage:discussion | The block should be able to inherit the typography based on the site's theme. Set a plan of implementation and any standards needed to handle font sizes for title and metadata. | 1.0 | Inherit Typography - The block should be able to inherit the typography based on the site's theme. Set a plan of implementation and any standards needed to handle font sizes for title and metadata. | priority | inherit typography the block should be able to inherit the typography based on the site s theme set a plan of implementation and any standards needed to handle font sizes for title and metadata | 1 |
621,073 | 19,576,865,350 | IssuesEvent | 2022-01-04 16:22:39 | null-seb/TFM-sbs-Backend | https://api.github.com/repos/null-seb/TFM-sbs-Backend | closed | Function User upload avatar | type: enhancement priority: high points: 8 :30m 7h | Introducing AliCloud Object Storage Service(OSS) and adding user upload avatar function | 1.0 | Function User upload avatar - Introducing AliCloud Object Storage Service(OSS) and adding user upload avatar function | priority | function user upload avatar introducing alicloud object storage service oss and adding user upload avatar function | 1 |
59,564 | 3,114,348,348 | IssuesEvent | 2015-09-03 08:12:18 | ceylon/ceylon-ide-eclipse | https://api.github.com/repos/ceylon/ceylon-ide-eclipse | opened | lots of new bugs in Rename refactoring | bug high priority | OK, so Rename is now a big mess. (Not really anybody's fault, since there were no tests, of course.)
Things I've noticed:
1. In `x.member`, placing the caret right between `x` and `.` and hitting ⌘⌥R selects `member` for renaming.
2. Very often, selecting a usage of a local variable, and ⌘⌥R selects the usage for renaming, but does not select the actual definition.
This needs to go through a full retest / bugfix process.
| 1.0 | lots of new bugs in Rename refactoring - OK, so Rename is now a big mess. (Not really anybody's fault, since there were no tests, of course.)
Things I've noticed:
1. In `x.member`, placing the caret right between `x` and `.` and hitting ⌘⌥R selects `member` for renaming.
2. Very often, selecting a usage of a local variable, and ⌘⌥R selects the usage for renaming, but does not select the actual definition.
This needs to go through a full retest / bugfix process.
| priority | lots of new bugs in rename refactoring ok so rename is now a big mess not really anybody s fault since there were no tests of course things i ve noticed in x member placing the caret right between x and and hitting ⌘⌥r selects member for renaming very often selecting a usage of a local variable and ⌘⌥r selects the usage for renaming but does not select the actual definition this needs to go through a full retest bugfix process | 1 |
534,390 | 15,615,571,717 | IssuesEvent | 2021-03-19 19:24:54 | 11ty/eleventy | https://api.github.com/repos/11ty/eleventy | closed | Eleventy 0.11.1 dependency is vulnerable - pug | high-priority npm-audit | **Describe the bug**
Vulnerable dependency message while `npm i --save @11ty/eleventy`
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir SSG | SSG
2. npm init -y
3. npm i --save @11ty/eleventy
**Expected behavior**
Dependencies shouldn't be vulnerable
**Screenshots**
```
# npm audit report
pug <3.0.1
Remote Code Execution - https://npmjs.com/advisories/1643
No fix available
node_modules/pug
@11ty/eleventy <=0.11.1
Depends on vulnerable versions of pug
node_modules/@11ty/eleventy
2 high severity vulnerabilities
```
**Environment:**
- OS and Version: Windows 10
- Eleventy Version : 0.11.1
| 1.0 | Eleventy 0.11.1 dependency is vulnerable - pug - **Describe the bug**
Vulnerable dependency message while `npm i --save @11ty/eleventy`
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir SSG | SSG
2. npm init -y
3. npm i --save @11ty/eleventy
**Expected behavior**
Dependencies shouldn't be vulnerable
**Screenshots**
```
# npm audit report
pug <3.0.1
Remote Code Execution - https://npmjs.com/advisories/1643
No fix available
node_modules/pug
@11ty/eleventy <=0.11.1
Depends on vulnerable versions of pug
node_modules/@11ty/eleventy
2 high severity vulnerabilities
```
**Environment:**
- OS and Version: Windows 10
- Eleventy Version : 0.11.1
| priority | eleventy dependency is vulnerable pug describe the bug vulnerable dependency message while npm i save eleventy to reproduce steps to reproduce the behavior mkdir ssg ssg npm init y npm i save eleventy expected behavior dependencies shouldn t be vulnerable screenshots npm audit report pug remote code execution no fix available node modules pug eleventy depends on vulnerable versions of pug node modules eleventy high severity vulnerabilities environment os and version windows eleventy version | 1 |
488,045 | 14,073,904,771 | IssuesEvent | 2020-11-04 06:09:41 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | test_prevote_reboot_minority_followers failed | priority/high severity/Major sig/raft type/bug | ## Bug Report
**What version of Rust are you using?**
rustc 1.29.0-nightly (4f3c7a472 2018-07-17)
**What operating system and CPU are you using?**
MacOS
**What did you do?**
run `make dev`
**What did you expect to see?**
test failed
**What did you see instead?**
```
thread 'raftstore::test_prevote::test_prevote_reboot_minority_followers' panicked at 'assertion failed: `(left == right)`
left: `true`,
right: `false`: Sends a PreVote or PreVoteResponse during failure.', tests/integrations/raftstore/test_prevote.rs:90:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:227
4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
at libstd/panicking.rs:475
5: std::panicking::continue_panic_fmt
at libstd/panicking.rs:390
6: std::panicking::try::do_call
at libstd/panicking.rs:345
7: integrations::raftstore::test_prevote::test_prevote
at tests/integrations/raftstore/test_prevote.rs:90
8: integrations::raftstore::test_prevote::test_prevote_reboot_minority_followers
at tests/integrations/raftstore/test_prevote.rs:212
9: integrations::__test::TESTS::{{closure}}
at tests/integrations/raftstore/test_prevote.rs:209
10: core::ops::function::FnOnce::call_once
at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:223
11: <F as alloc::boxed::FnBox<A>>::call_box
at libtest/lib.rs:1454
at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:223
at /Users/travis/build/rust-lang/rust/src/liballoc/boxed.rs:640
12: panic_unwind::dwarf::eh::read_encoded_pointer
at libpanic_unwind/lib.rs:106
``` | 1.0 | test_prevote_reboot_minority_followers failed - ## Bug Report
**What version of Rust are you using?**
rustc 1.29.0-nightly (4f3c7a472 2018-07-17)
**What operating system and CPU are you using?**
MacOS
**What did you do?**
run `make dev`
**What did you expect to see?**
test failed
**What did you see instead?**
```
thread 'raftstore::test_prevote::test_prevote_reboot_minority_followers' panicked at 'assertion failed: `(left == right)`
left: `true`,
right: `false`: Sends a PreVote or PreVoteResponse during failure.', tests/integrations/raftstore/test_prevote.rs:90:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:227
4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
at libstd/panicking.rs:475
5: std::panicking::continue_panic_fmt
at libstd/panicking.rs:390
6: std::panicking::try::do_call
at libstd/panicking.rs:345
7: integrations::raftstore::test_prevote::test_prevote
at tests/integrations/raftstore/test_prevote.rs:90
8: integrations::raftstore::test_prevote::test_prevote_reboot_minority_followers
at tests/integrations/raftstore/test_prevote.rs:212
9: integrations::__test::TESTS::{{closure}}
at tests/integrations/raftstore/test_prevote.rs:209
10: core::ops::function::FnOnce::call_once
at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:223
11: <F as alloc::boxed::FnBox<A>>::call_box
at libtest/lib.rs:1454
at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:223
at /Users/travis/build/rust-lang/rust/src/liballoc/boxed.rs:640
12: panic_unwind::dwarf::eh::read_encoded_pointer
at libpanic_unwind/lib.rs:106
``` | priority | test prevote reboot minority followers failed bug report what version of rust are you using rustc nightly what operating system and cpu are you using macos what did you do run make dev what did you expect to see test failed what did you see instead thread raftstore test prevote test prevote reboot minority followers panicked at assertion failed left right left true right false sends a prevote or prevoteresponse during failure tests integrations raftstore test prevote rs note some details are omitted run with rust backtrace full for a verbose backtrace stack backtrace std sys unix backtrace tracing imp unwind backtrace at libstd sys unix backtrace tracing gcc s rs std sys common backtrace print at libstd sys common backtrace rs at libstd sys common backtrace rs std panicking default hook closure at libstd panicking rs std panicking default hook at libstd panicking rs as core panic boxmeup get at libstd panicking rs std panicking continue panic fmt at libstd panicking rs std panicking try do call at libstd panicking rs integrations raftstore test prevote test prevote at tests integrations raftstore test prevote rs integrations raftstore test prevote test prevote reboot minority followers at tests integrations raftstore test prevote rs integrations test tests closure at tests integrations raftstore test prevote rs core ops function fnonce call once at users travis build rust lang rust src libcore ops function rs call box at libtest lib rs at users travis build rust lang rust src libcore ops function rs at users travis build rust lang rust src liballoc boxed rs panic unwind dwarf eh read encoded pointer at libpanic unwind lib rs | 1 |
51,885 | 3,015,127,508 | IssuesEvent | 2015-07-29 18:04:39 | Person8880/Shine | https://api.github.com/repos/Person8880/Shine | closed | 275 Compatibility | High Priority Investigate | I have no idea what is going to break, but I can guess the chatbox is going to be different to the chat outside of it because of the rookie/commander tags thing. | 1.0 | 275 Compatibility - I have no idea what is going to break, but I can guess the chatbox is going to be different to the chat outside of it because of the rookie/commander tags thing. | priority | compatibility i have no idea what is going to break but i can guess the chatbox is going to be different to the chat outside of it because of the rookie commander tags thing | 1 |
657,594 | 21,797,517,206 | IssuesEvent | 2022-05-15 21:04:21 | KinsonDigital/NugetVersionChecker | https://api.github.com/repos/KinsonDigital/NugetVersionChecker | opened | 🚧Convert to NET action and make sure it works | high priority | ### I have done the items below . . .
- [X] I have created a title but left the 🚧 emoji behind.
### Description
Convert the project to a NET project.
### Acceptance Criteria
**This issue is finished when:**
- [ ] Project converted
- [ ] Make sure that the action works.
- [ ] Unit tests created
### ToDo Items
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [X] Issue linked to the proper project
### Issue Dependencies
- #3
### Related Work
_No response_ | 1.0 | 🚧Convert to NET action and make sure it works - ### I have done the items below . . .
- [X] I have created a title but left the 🚧 emoji behind.
### Description
Convert the project to a NET project.
### Acceptance Criteria
**This issue is finished when:**
- [ ] Project converted
- [ ] Make sure that the action works.
- [ ] Unit tests created
### ToDo Items
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [X] Issue linked to the proper project
### Issue Dependencies
- #3
### Related Work
_No response_ | priority | 🚧convert to net action and make sure it works i have done the items below i have created a title but left the 🚧 emoji behind description convert the project to a net project acceptance criteria this issue is finished when project converted make sure that the action works unit tests created todo items priority label added to issue low priority medium priority or high priority issue linked to the proper project issue dependencies related work no response | 1 |
787,961 | 27,737,270,782 | IssuesEvent | 2023-03-15 12:07:10 | fractal-analytics-platform/fractal-tasks-core | https://api.github.com/repos/fractal-analytics-platform/fractal-tasks-core | closed | Establish ROIs for organoids | High Priority | There are 4 cases for how organoids can map to FOVs and how there ROIs can relate to each other. Here is an overview of the 4. Fractal should cover cases # 1 - # 3, # 4 is out of scope.

### 1) No ROI overlap, no FOVs crossing
The simplest case. ROIs are all within the FOVs. Potentially, there may be some FOVs that do not contain ROIs. Easy to handle: Some task segments the organoids and we create new bounding-box ROIs based on those segmentations. When processing those ROIs, no special care needs to be taken and just looping over ROIs will make the processing more efficient as the non-organoid regions are not being processed.
(Will cover most use-cases of @MaksHess and many people in the Pelkmans lab)
### 2: No ROI overlap, but ROIs cross FOVs
Slightly more complex case where the ROIs cross boundaries of the field of views. As long as we cover the stitching of ROIs (i.e. we were in modality 1 or modality 2 [when precision is not critical] of this overview: https://github.com/fractal-analytics-platform/fractal-tasks-core/issues/11), we can process them like in Case # 1.
### 3: ROI overlap, but not organoid overlap
This is the trickiest case we will need to handle. Realistically, we will be using bounding-box ROIs for a while (separate discussion whether eventually more complex ROIs could be defined, but that would make index-based processing with dask (see: https://github.com/fractal-analytics-platform/fractal-tasks-core/issues/27) even more complex). Thus, a bounding-box can overlap while the object within will not overlap. This is quite likely to happen for some use cases in the Liberali lab when organoids grow densely.
How do we tackle this? Let's look into masked arrays in dask: https://docs.dask.org/en/stable/generated/dask.array.ma.masked_where.html Maybe one would specify the label image & the label value (e.g. path to label image, integer value of relevant label) as additional data for the ROI and, if a ROI contains `mask_label_img` and `mask_label_value`, it uses such masking before reading and writing data to/from a ROI?
### 4: Organoid overlap
This is out of scope. It could happen when the MIP of organoids that are close in 3D is processed. In that case, our segmentation networks would also assign an MIP pixel to one organoid only. Thus, if a user wants to process this in more detail, organoid segmentation should be done in 3D.
Other cases of where a 3D voxel should actually belong to multiple objects are out of scope for Fractal | 1.0 | Establish ROIs for organoids - There are 4 cases for how organoids can map to FOVs and how there ROIs can relate to each other. Here is an overview of the 4. Fractal should cover cases # 1 - # 3, # 4 is out of scope.

### 1) No ROI overlap, no FOVs crossing
The simplest case. ROIs are all within the FOVs. Potentially, there may be some FOVs that do not contain ROIs. Easy to handle: Some task segments the organoids and we create new bounding-box ROIs based on those segmentations. When processing those ROIs, no special care needs to be taken and just looping over ROIs will make the processing more efficient as the non-organoid regions are not being processed.
(Will cover most use-cases of @MaksHess and many people in the Pelkmans lab)
### 2: No ROI overlap, but ROIs cross FOVs
Slightly more complex case where the ROIs cross boundaries of the field of views. As long as we cover the stitching of ROIs (i.e. we were in modality 1 or modality 2 [when precision is not critical] of this overview: https://github.com/fractal-analytics-platform/fractal-tasks-core/issues/11), we can process them like in Case # 1.
### 3: ROI overlap, but not organoid overlap
This is the trickiest case we will need to handle. Realistically, we will be using bounding-box ROIs for a while (separate discussion whether eventually more complex ROIs could be defined, but that would make index-based processing with dask (see: https://github.com/fractal-analytics-platform/fractal-tasks-core/issues/27) even more complex). Thus, a bounding-box can overlap while the object within will not overlap. This is quite likely to happen for some use cases in the Liberali lab when organoids grow densely.
How do we tackle this? Let's look into masked arrays in dask: https://docs.dask.org/en/stable/generated/dask.array.ma.masked_where.html Maybe one would specify the label image & the label value (e.g. path to label image, integer value of relevant label) as additional data for the ROI and, if a ROI contains `mask_label_img` and `mask_label_value`, it uses such masking before reading and writing data to/from a ROI?
### 4: Organoid overlap
This is out of scope. It could happen when the MIP of organoids that are close in 3D is processed. In that case, our segmentation networks would also assign an MIP pixel to one organoid only. Thus, if a user wants to process this in more detail, organoid segmentation should be done in 3D.
Other cases of where a 3D voxel should actually belong to multiple objects are out of scope for Fractal | priority | establish rois for organoids there are cases for how organoids can map to fovs and how there rois can relate to each other here is an overview of the fractal should cover cases is out of scope no roi overlap no fovs crossing the simplest case rois are all within the fovs potentially there may be some fovs that do not contain rois easy to handle some task segments the organoids and we create new bounding box rois based on those segmentations when processing those rois no special care needs to be taken and just looping over rois will make the processing more efficient as the non organoid regions are not being processed will cover most use cases of makshess and many people in the pelkmans lab no roi overlap but rois cross fovs slightly more complex case where the rois cross boundaries of the field of views as long as we cover the stitching of rois i e we were in modality or modality of this overview we can process them like in case roi overlap but not organoid overlap this is the trickiest case we will need to handle realistically we will be using bounding box rois for a while separate discussion whether eventually more complex rois could be defined but that would index based processing with dask see make even more complex thus a bounding box can overlap while the object within will not overlap this is quite likely to happen for some use cases in the liberali lab when organoids grow densely how do we tackle this let s look into masked arrays in dask maybe one would specify the label image the label value e g path to label image integer value of relevant label as additional data for the roi and if a roi contains mask label img and mask label value it uses such masking before reading and writing data to from a roi organoid overlap this is out of scope it could happen when the mip is processed of organoids that are close in in that case our segmentation networks would also assign an mip pixel to one organoid only thus if a user wants to process this in more detail organoid segmentation should be done in other cases of where a voxel should actually belong to multiple objects are out of scope for fractal | 1
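The masking approach described in the record above for case 3 — masking other labels so an overlapping bounding box never touches a neighbouring organoid — can be sketched with NumPy's masked arrays, whose interface `dask.array.ma.masked_where` mirrors. The label image, helper name, and values below are hypothetical illustration data, not part of the fractal-tasks-core codebase:

```python
import numpy as np

# Toy version of case 3: the bounding boxes of labels 1 and 2 overlap,
# but the labelled organoids themselves do not. Before processing the
# ROI of label 1, mask every pixel belonging to any *other* non-zero
# label, so the neighbouring organoid (label 2) is never read or written.
label_img = np.array(
    [[1, 1, 0],
     [1, 2, 2],
     [0, 2, 2]]
)
intensity = np.arange(9).reshape(3, 3)  # stand-in for image data in the ROI

def read_roi_masked(data, labels, label_value):
    """Return `data` with pixels of other non-zero labels masked out."""
    other = (labels != label_value) & (labels != 0)
    return np.ma.masked_where(other, data)

roi1 = read_roi_masked(intensity, label_img, 1)
print(roi1.sum())  # only background + label-1 pixels contribute: 0+1+2+3+6 = 12
```

With dask, the same call would run lazily on chunked arrays, which is why the issue points at `dask.array.ma.masked_where` rather than materialising the full label image.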
488,040 | 14,073,883,456 | IssuesEvent | 2020-11-04 06:06:21 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | support.mozilla.org - see bug description | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-important | <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61028 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/en-US/kb/whats-new-firefox-android?as=u&utm_source=inproduct&redirectslug=whats-new-firefox-android-79&redirectlocale=en-US
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: Yes Opera
**Problem type**: Something else
**Description**: dark mode add-on doesn't make this page dark
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/c7c43417-3fef-4766-b8d4-03f7077611b5.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201101092255</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/6457b285-bc12-4663-86e9-51c57daec017)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | support.mozilla.org - see bug description - <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61028 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/en-US/kb/whats-new-firefox-android?as=u&utm_source=inproduct&redirectslug=whats-new-firefox-android-79&redirectlocale=en-US
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: Yes Opera
**Problem type**: Something else
**Description**: dark mode add-on doesn't make this page dark
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/c7c43417-3fef-4766-b8d4-03f7077611b5.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201101092255</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/6457b285-bc12-4663-86e9-51c57daec017)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | support mozilla org see bug description url browser version firefox mobile operating system android tested another browser yes opera problem type something else description dark mode add on doesn t make this page dark steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
746,885 | 26,049,668,745 | IssuesEvent | 2022-12-22 17:21:25 | KatsuteDev/Background | https://api.github.com/repos/KatsuteDev/Background | closed | Extreme lag when setting folder for editor backgrounds using VS Code Insiders | critical bug high priority | ### Operating System
Windows 10
### VSCode Version
1.75.0-insider
### Extension Version
2.3.1
### Issue
When using the background config menu, and setting a folder as the glob for background in the "Editor", VS Code takes a long time to load, and using a low wallpaper change time causes huge lags. This doesn't happen while setting the background for "Window" or when you use a single file as background.
PS: While using a small Time parameter while setting a folder as glob in the "Window" setting causes a very small CPU spike, it's completely doable, and the loading works as expected.
### How to replicate
Open background menu > choose Editor > select folder as glob > Reload editor.
Your editor should take longer to start.
Additionally, if you set the "Time" parameter to something like 5-10 seconds, reverting this change becomes almost impossible, as the editor is constantly lagging when changing backgrounds.
I believe this has something to do with the actual wallpaper load, because it only happens when the file is being loaded (and with a low Time, it's loaded constantly), after that the performance is normalized. | 1.0 | Extreme lag when setting folder for editor backgrounds using VS Code Insiders - ### Operating System
Windows 10
### VSCode Version
1.75.0-insider
### Extension Version
2.3.1
### Issue
When using the background config menu, and setting a folder as the glob for background in the "Editor", VS Code takes a long time to load, and using a low wallpaper change time causes huge lags. This doesn't happen while setting the background for "Window" or when you use a single file as background.
PS: While using a small Time parameter while setting a folder as glob in the "Window" setting causes a very small CPU spike, it's completely doable, and the loading works as expected.
### How to replicate
Open background menu > choose Editor > select folder as glob > Reload editor.
Your editor should take longer to start.
Additionally, if you set the "Time" parameter to something like 5-10 seconds, reverting this change becomes almost impossible, as the editor is constantly lagging when changing backgrounds.
I believe this has something to do with the actual wallpaper load, because it only happens when the file is being loaded (and with a low Time, it's loaded constantly), after that the performance is normalized. | priority | extreme lag when setting folder for editor backgrounds using vs code insiders operating system windows vscode version insider extension version issue when using the background config menu and setting a folder as the glob for background in the editor vs code takes a long time to load and using a low wallpaper change time cause huge lags this doesn t happen while setting the background for window or when you use a single file as background ps while using a small time parameter while setting a folder as glob in the window setting causes a very small cpu spike it s completely doable and the loading works as expected how to replicate open background menu choose editor select folder as glob reload editor your editor should take longer to start additionally if you set the time parameter to something like seconds reverting this change becomes almost impossible as the editor is constantly lagging when changing backgrounds i believe this has something to do with the actual wallpaper load because it only happens when the file is being loaded and with a low time it s loaded constantly after that the performance is normalized | 1 |
669,550 | 22,630,552,397 | IssuesEvent | 2022-06-30 14:20:07 | heading1/WYLSBingsu | https://api.github.com/repos/heading1/WYLSBingsu | closed | [BE] Modify login API - add userId to res | ⚙️ Backend ❗️high-priority 🔧 enhancement | ## 🔨 Feature description
Modify login API - add userId to res
## 📑 Completion criteria
When userId is added without issues
## 💭 Related backlog
[Major category]-[Middle category]-[Subcategory]
## 💭 Estimated work time
0.5h
 | 1.0 | [BE] Modify login API - add userId to res - ## 🔨 Feature description
Modify login API - add userId to res
## 📑 Completion criteria
When userId is added without issues
## 💭 Related backlog
[Major category]-[Middle category]-[Subcategory]
## 💭 Estimated work time
0.5h
 | priority | modify login api add userid to res 🔨 feature description modify login api add userid to res 📑 completion criteria when userid is added without issues 💭 related backlog 💭 estimated work time | 1
793,420 | 27,995,774,732 | IssuesEvent | 2023-03-27 08:21:54 | AY2223S2-CS2103T-W10-2/tp | https://api.github.com/repos/AY2223S2-CS2103T-W10-2/tp | closed | Define priority when creating Tasks, and ability to edit priority. | type.Task priority.High | There's currently no command to set or change a todo/event's priority. | 1.0 | Define priority when creating Tasks, and ability to edit priority. - There's currently no command to set or change a todo/event's priority. | priority | define priority when creating tasks and ability to edit priority there s currently no command to set or change a todo event s priority | 1 |
47,070 | 2,971,952,507 | IssuesEvent | 2015-07-14 10:35:29 | transientskp/tkp | https://api.github.com/repos/transientskp/tkp | closed | Source Association Problem | bug priority high | There appears to be a bug in source association in Release 2.0. Sources do not appear to be always associated correctly by Release 2.0 and this is causing major issues with TraP use on large datasets.
I have identified an example of this in action in one of my datasets. The three sources in this issue should all be the same source (using de Ruiter radius). There is one relatively constant runcat source (multiple detections, id 33589) with two "new sources". The two new sources listed below are only detected once each and hence the two runcat positions remain constant after detection. One of the two new sources is detected when the runcat source 33589 is not blindly detected in the image - so should have been associated. The other, 35391, should be a 1-to-many source association type but this does not seem to be the case in the catalogue.
The database details and Banana URLs for the relevant sources have been pasted into HipChat. The de Ruiter radius was set to ~5.6 in the job parameters file.
- Runcat source, 33589, detected 468 times. Position (354.039°, -26.551°) ± (28.941″, 25.889″)
- Runcat source, 34809, detected once in image 12060 (33589 was not detected). Position (353.942°, -26.554°) ± (626.074″, 560.032″). de Ruiter radius from source 33589 (calculated manually) 0.55751549. Expected to have been a 1-to-1 association with 33589.
- Runcat source, 35391, detected once in image 12381 (33589 was detected in same image). Position (353.897°, -26.559°) ± (626.173″, 560.095″). de Ruiter radius from source 33589 (calculated manually) 0.817202247. Expected to have been a 1-to-many association with 33589, but does not appear to be recorded as such in the assocxtrsource table.
There are 2 faint sources in this region of the image; 1 easily detected - source 33589 - and a probably confusing source a small distance away, likely causing the sources 34809 and 35391. However, the de Ruiter radius calculations show that these sources should be associated (as it doesn't care about the actual image properties, just the source positions and errors).
Why were these sources not associated? Caused by the runcat position of 33589 at the time of source association step (I think unlikely due to the big systematic error inserted in the job parameters to force the association) or a bug in the code? | 1.0 | Source Association Problem - There appears to be a bug in source association in Release 2.0. Sources do not appear to be always associated correctly by Release 2.0 and this is causing major issues with TraP use on large datasets.
I have identified an example of this in action in one of my datasets. The three sources in this issue should all be the same source (using de Ruiter radius). There is one relatively constant runcat source (multiple detections, id 33589) with two "new sources". The two new sources listed below are only detected once each and hence the two runcat positions remain constant after detection. One of the two new sources is detected when the runcat source 33589 is not blindly detected in the image - so should have been associated. The other, 35391, should be a 1-to-many source association type but this does not seem to be the case in the catalogue.
The database details and Banana URLs for the relevant sources have been pasted into HipChat. The de Ruiter radius was set to ~5.6 in the job parameters file.
- Runcat source, 33589, detected 468 times. Position (354.039°, -26.551°) ± (28.941″, 25.889″)
- Runcat source, 34809, detected once in image 12060 (33589 was not detected). Position (353.942°, -26.554°) ± (626.074″, 560.032″). de Ruiter radius from source 33589 (calculated manually) 0.55751549. Expected to have been a 1-to-1 association with 33589.
- Runcat source, 35391, detected once in image 12381 (33589 was detected in same image). Position (353.897°, -26.559°) ± (626.173″, 560.095″). de Ruiter radius from source 33589 (calculated manually) 0.817202247. Expected to have been a 1-to-many association with 33589, but does not appear to be recorded as such in the assocxtrsource table.
There are 2 faint sources in this region of the image; 1 easily detected - source 33589 - and a probably confusing source a small distance away, likely causing the sources 34809 and 35391. However, the de Ruiter radius calculations show that these sources should be associated (as it doesn't care about the actual image properties, just the source positions and errors).
Why were these sources not associated? Caused by the runcat position of 33589 at the time of source association step (I think unlikely due to the big systematic error inserted in the job parameters to force the association) or a bug in the code? | priority | source association problem there appears to be a bug in source association in release sources do not appear to be always associated correctly by release and this is causing major issues with trap use on large datasets i have identified an example of this in action in one of my datasets the three sources in this issue should all be the same source using de ruiter radius there is one relatively constant runcat source multiple detections id with two new sources the two new sources listed below are only detected once each and hence the two runcat positions remain constant after detection one of the two new sources is detected when the runcat source is not blindly detected in the image so should have been associated the other should be a to many source association type but this does not seem to be the case in the catalogue the database details and banana urls for the relevant sources have been pasted into hipchat the de ruiter radius was set to in the job parameters file runcat source detected times position ° ° ± ″ ″ runcat source detected once in image was not detected position ° ° ± ″ ″ de ruiter radius from source calculated manually expected to have been a to association with runcat source detected once in image was detected in same image position ° ° ± ″ ″ de ruiter radius from source calculated manually expected to have been a to many association with but does not appear to be recorded as such in the assocxtrsource table there are faint sources in this region of the image easily detected source and a probably confusing source a small distance away likely causing the sources and however the de ruiter radius calculations show that these sources should be associated as it doesn t care about the actual image 
properties just the source positions and errors why were these sources not associated caused by the runcat position of at the time of source association step i think unlikely due to the big systematic error inserted in the job parameters to force the association or a bug in the code | 1 |
161,503 | 6,130,645,472 | IssuesEvent | 2017-06-24 07:29:31 | Supadog/DB_iti | https://api.github.com/repos/Supadog/DB_iti | closed | Transcript: Multi-Programme Student | Half done High Priority | - Add notification when a student has been enrolled in multiple programmes
- <del>add programme name beside semester name. </del> | 1.0 | Transcript: Multi-Programme Student - - Add notification when a student has been enrolled in multiple programmes
- <del>add programme name beside semester name. </del> | priority | transcript multi programme student add notification when a student has been enrolled in multiple programmes add programme name beside semester name | 1 |
677,193 | 23,154,350,308 | IssuesEvent | 2022-07-29 11:30:24 | Yamato-Security/hayabusa | https://api.github.com/repos/Yamato-Security/hayabusa | closed | Enhancement: Make startswith, endswith, contains case-insensitive | enhancement Priority:High | Hayabusa Readme:
<img width="1024" alt="Screen Shot 2022-07-25 at 13 47 16" src="https://user-images.githubusercontent.com/71482215/180700853-0b67d9fa-2819-4c66-861a-afcc96fd9eeb.png">
Currently, `|startswith`, `|endswith`, and `|contains` are case-sensitive, but after rechecking the Sigma Wiki, it appears they are supposed to be case-insensitive.
<img width="675" alt="Screen Shot 2022-07-25 at 13 45 59" src="https://user-images.githubusercontent.com/71482215/180700999-ad683c65-daaa-4b8a-b6b2-2265cb206d87.png">
https://github.com/SigmaHQ/sigma/wiki/Specification
If matching is case-sensitive, rules can easily be bypassed, so we want to make these three modifiers case-insensitive.
`|re` and `|equalsfield` are fine as they are.
| 1.0 | Enhancement: Make startswith, endswith, contains case-insensitive - Hayabusa Readme:
<img width="1024" alt="Screen Shot 2022-07-25 at 13 47 16" src="https://user-images.githubusercontent.com/71482215/180700853-0b67d9fa-2819-4c66-861a-afcc96fd9eeb.png">
Currently, `|startswith`, `|endswith`, and `|contains` are case-sensitive, but after rechecking the Sigma Wiki, it appears they are supposed to be case-insensitive.
<img width="675" alt="Screen Shot 2022-07-25 at 13 45 59" src="https://user-images.githubusercontent.com/71482215/180700999-ad683c65-daaa-4b8a-b6b2-2265cb206d87.png">
https://github.com/SigmaHQ/sigma/wiki/Specification
If matching is case-sensitive, rules can easily be bypassed, so we want to make these three modifiers case-insensitive.
`|re` and `|equalsfield` are fine as they are.
| priority | enhancement make startswith endswith contains case insensitive hayabusa readme img width alt screen shot at src startswith endswith contains は大文字小文字を区別するようにしていますが、sigma wikiを再確認したら、区別しないのが正しいっぽいです。 img width alt screen shot at src 区別してしまうと、簡単にルールをバイパスできてしまうので、 。 re と equalsfield は今のままで大丈夫です。 | 1 |
70,079 | 3,316,975,499 | IssuesEvent | 2015-11-06 19:25:19 | MetropolitanTransportationCommission/vpp-webapp | https://api.github.com/repos/MetropolitanTransportationCommission/vpp-webapp | opened | Move db passwords to config file | Enhancement Request High Priority | Mike and I were discussing this and we'd like to implement this change now.
I found the following thread about this:
http://stackoverflow.com/questions/2397822/what-is-the-best-practice-for-dealing-with-passwords-in-github
| 1.0 | Move db passwords to config file - Mike and I were discussing this and we'd like to implement this change now.
I found the following thread about this:
http://stackoverflow.com/questions/2397822/what-is-the-best-practice-for-dealing-with-passwords-in-github
| priority | move db passwords to config file mike and i were discussing this and we d like to implement this change now i found the following thread about this | 1 |
407,471 | 11,914,004,520 | IssuesEvent | 2020-03-31 12:57:37 | D0019208/Service-Loop-Server | https://api.github.com/repos/D0019208/Service-Loop-Server | closed | Check if room that tutorial is scheduled to happen in is booked. | high priority | When a tutor is creating the draft agreement, the system should check if the room is already booked. | 1.0 | Check if room that tutorial is scheduled to happen in is booked. - When a tutor is creating the draft agreement, the system should check if the room is already booked. | priority | check if room that tutorial is scheduled to happen in is booked when a tutor is creating the draft agreement the system should check if the room is already booked | 1 |
225,585 | 7,488,801,388 | IssuesEvent | 2018-04-06 03:49:21 | EvictionLab/eviction-lab-website | https://api.github.com/repos/EvictionLab/eviction-lab-website | closed | Pictorial menu problems on mobile | high priority | I am seeing some items missing from the pictorial menu on mobile on staging. Maps & Data doesn't show, and the second part of the middle row doesn't show either.

| 1.0 | Pictorial menu problems on mobile - I am seeing some items missing from the pictorial menu on mobile on staging. Maps & Data doesn't show, and the second part of the middle row doesn't show either.

| priority | pictorial menu problems on mobile i am seeing some items missing from the pictorial menu on mobile on staging maps data doesn t show and the second part of the middle row doesn t show either | 1 |
550,854 | 16,133,609,825 | IssuesEvent | 2021-04-29 08:55:35 | microsoft/STL | https://api.github.com/repos/microsoft/STL | opened | `<cmath>`: New intrinsics broke CUDA | bug high priority | When #1336 added usage of new intrinsics to `<cmath>`, we forgot about CUDA. :scream_cat: (Long ago, we broke CUDA by adding new type traits intrinsics in C++14 mode, but these math functions were just different enough that I didn't remember the interaction. Oops!)
I'm not exactly sure why our CUDA unit test didn't catch this, given that the affected overloads aren't templates:
https://github.com/microsoft/STL/blob/f675d68f03cfb7a303cd5408502f2642947d32b7/stl/inc/cmath#L62-L70
We are including `<cmath>`, so there's probably some detail of the CUDA compilation process that I don't understand:
https://github.com/microsoft/STL/blob/f675d68f03cfb7a303cd5408502f2642947d32b7/stl/inc/__msvc_all_public_headers.hpp#L156
https://github.com/microsoft/STL/blob/f675d68f03cfb7a303cd5408502f2642947d32b7/tests/std/tests/GH_000639_nvcc_include_all/test.compile.pass.cpp#L4-L6
In any event, fixing this should be easy, we just need to backport it to VS 2019 16.9.
Originally encountered in https://github.com/pytorch/pytorch/issues/54382 and tracked by Microsoft-internal VSO-1314894 / AB#1314894 . | 1.0 | `<cmath>`: New intrinsics broke CUDA - When #1336 added usage of new intrinsics to `<cmath>`, we forgot about CUDA. :scream_cat: (Long ago, we broke CUDA by adding new type traits intrinsics in C++14 mode, but these math functions were just different enough that I didn't remember the interaction. Oops!)
I'm not exactly sure why our CUDA unit test didn't catch this, given that the affected overloads aren't templates:
https://github.com/microsoft/STL/blob/f675d68f03cfb7a303cd5408502f2642947d32b7/stl/inc/cmath#L62-L70
We are including `<cmath>`, so there's probably some detail of the CUDA compilation process that I don't understand:
https://github.com/microsoft/STL/blob/f675d68f03cfb7a303cd5408502f2642947d32b7/stl/inc/__msvc_all_public_headers.hpp#L156
https://github.com/microsoft/STL/blob/f675d68f03cfb7a303cd5408502f2642947d32b7/tests/std/tests/GH_000639_nvcc_include_all/test.compile.pass.cpp#L4-L6
In any event, fixing this should be easy, we just need to backport it to VS 2019 16.9.
Originally encountered in https://github.com/pytorch/pytorch/issues/54382 and tracked by Microsoft-internal VSO-1314894 / AB#1314894 . | priority | new intrinsics broke cuda when added usage of new intrinsics to we forgot about cuda scream cat long ago we broke cuda by adding new type traits intrinsics in c mode but these math functions were just different enough that i didn t remember the interaction oops i m not exactly sure why our cuda unit test didn t catch this given that the affected overloads aren t templates we are including so there s probably some detail of the cuda compilation process that i don t understand in any event fixing this should be easy we just need to backport it to vs originally encountered in and tracked by microsoft internal vso ab | 1 |
446,084 | 12,839,252,981 | IssuesEvent | 2020-07-07 18:59:02 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Creating a relation sets wrong expression in "display expression" and breaks the "select features by value" functionality | Bug High Priority | I created a `1:m` relationship between two polygonal vector layers (_I attach data and project_),

doing a search by value using the **F3** function key (selection by value / select feature) on the parent layer, I get a red error message.
```
The relationship is invalid. Check that the relationship definitions are correct
```

This is strange, because the **COD_REG** field is `NULL`.
In reality, the relationship is correct because by opening the attribute table (parent layer) in module mode, everything works well:

---
## DATA AND PROJECT
[BUG_selectbyrelationship.zip](https://github.com/qgis/QGIS/files/4186674/BUG_selectbyrelationship.zip)
---
## the problem occurs in QGIS 3.4.15, 3.10.2-2 and master ` with clean profile !!!`
---
```
Versione di QGIS 3.10.2-A Coruña
Revisione codice QGIS d4cd3cfe5a
Compilato con Qt 5.11.2
Esecuzione con Qt 5.11.2
Compilato con GDAL/OGR 3.0.3
Esecuzione con GDAL/OGR 3.0.4
Compilato con GEOS 3.8.0-CAPI-1.13.1
Esecuzione con GEOS 3.8.0-CAPI-1.13.1
Compiled against SQLite 3.29.0
Running against SQLite 3.29.0
Versione client PostgreSQL 11.5
Versione SpatiaLite 4.3.0
Versione QWT 6.1.3
Versione QScintilla2 2.10.8
Compiled against PROJ 6.3.0
Running against PROJ Rel. 6.3.0, January 1st, 2020
OS Version Windows 10 (10.0)
```
### OSGeo4W64 | 1.0 | Creating a relation sets wrong expression in "display expression" and breaks the "select features by value" functionality - I created a `1:m` relationship between two polygonal vector layers (_I attach data and project_),

doing a search by value using the **F3** function key (selection by value / select feature) on the parent layer, I get a red error message.
```
The relationship is invalid. Check that the relationship definitions are correct
```

This is strange, because the **COD_REG** field is `NULL`.
In reality, the relationship is correct because by opening the attribute table (parent layer) in module mode, everything works well:

---
## DATA AND PROJECT
[BUG_selectbyrelationship.zip](https://github.com/qgis/QGIS/files/4186674/BUG_selectbyrelationship.zip)
---
## the problem occurs in QGIS 3.4.15, 3.10.2-2 and master ` with clean profile !!!`
---
```
Versione di QGIS 3.10.2-A Coruña
Revisione codice QGIS d4cd3cfe5a
Compilato con Qt 5.11.2
Esecuzione con Qt 5.11.2
Compilato con GDAL/OGR 3.0.3
Esecuzione con GDAL/OGR 3.0.4
Compilato con GEOS 3.8.0-CAPI-1.13.1
Esecuzione con GEOS 3.8.0-CAPI-1.13.1
Compiled against SQLite 3.29.0
Running against SQLite 3.29.0
Versione client PostgreSQL 11.5
Versione SpatiaLite 4.3.0
Versione QWT 6.1.3
Versione QScintilla2 2.10.8
Compiled against PROJ 6.3.0
Running against PROJ Rel. 6.3.0, January 1st, 2020
OS Version Windows 10 (10.0)
```
### OSGeo4W64 | priority | creating a relation sets wrong expression in display expression and breaks the select features by value functionality i created a m relationship between two polygonal vector layers i attach data and project doing a search by value using the function key selection by value select feature on the parent layer i get a red error message the relationship is invalid check that the relationship definitions are correct and strange because the cod reg field is null in reality the relationship is correct because by opening the attribute table parent layer in module mode everything works well data and project the problem occurs in qgis and master with clean profile versione di qgis a coruña revisione codice qgis compilato con qt esecuzione con qt compilato con gdal ogr esecuzione con gdal ogr compilato con geos capi esecuzione con geos capi compiled against sqlite running against sqlite versione client postgresql versione spatialite versione qwt versione compiled against proj running against proj rel january os version windows | 1 |
519,261 | 15,048,446,853 | IssuesEvent | 2021-02-03 10:11:47 | mirin-1024/monoge | https://api.github.com/repos/mirin-1024/monoge | reopened | [WIP] Production environment performance tuning | priority: high | ## Overview
### what
Performance tuning of the production application using PageSpeed Insights
### why
To make the application more responsive
## Changes
- [x] Enable gzip compression in nginx
- [x] Add asynchronous loading configuration for JavaScript
## Additional tasks
- [x] Configuration for storing images in AWS S3 in production
### Related issues
### Parent issue
#### Notes
| 1.0 | [WIP] Production environment performance tuning - ## Overview
### what
Performance tuning of the production application using PageSpeed Insights
### why
To make the application more responsive
## Changes
- [x] Enable gzip compression in nginx
- [x] Add asynchronous loading configuration for JavaScript
## Additional tasks
- [x] Configuration for storing images in AWS S3 in production
### Related issues
### Parent issue
#### Notes
| priority | 本番環境のパフォーマンスチューニング 概要 what pagespeed insightsを利用した本番アプリケーションのパフォーマンスチューニング why よりレスポンスの良いアプリケーション作成のため 変更 nginxのgzip圧縮を有効にする javascript非同期読み込み設定の追加 追加タスク 本番環境でaws 関連課題 親課題 備考 | 1 |
33,117 | 2,762,832,940 | IssuesEvent | 2015-04-29 02:39:50 | bigtester/automation-test-engine | https://api.github.com/repos/bigtester/automation-test-engine | closed | need to give the supported browsers version number on wiki | high priority | need to put the supported browser version in number 3 section of page, https://github.com/bigtester/automation-test-engine/wiki/I.a-Features | 1.0 | need to give the supported browsers version number on wiki - need to put the supported browser version in number 3 section of page, https://github.com/bigtester/automation-test-engine/wiki/I.a-Features | priority | need to give the supported browsers version number on wiki need to put the supported browser version in number section of page | 1 |
60,732 | 3,133,430,558 | IssuesEvent | 2015-09-10 01:29:35 | washingtontrails/vms | https://api.github.com/repos/washingtontrails/vms | closed | Missing fields for work party on Salesforce. | High Priority Salesforce VMS BUDGET | Turns out the description field can be longer than what is shown on the iteration page in ScrumDo, so we missed some fields that need to be added. This ticket is for adding them to Salesforce.
- Before/After WP Location - Header 2, move this to the Location Tab so directions to the WP as well as any Before/After event are displayed and printed together.
- Work Party Schedule - Header 2
- Planning information - Header 1
- What it takes to do this work party - Header 2
- What to Wear - Header 2
- What to Bring - Header 2
- More Info - Header 1, note links to FAQ, Trail Work Guide webpages. Also link to Hiking Guide and Trip Reports about this area (this is based on the Hiking Guide URL for the work party).
| 1.0 | Missing fields for work party on Salesforce. - Turns out the description field can be longer than what is shown on the iteration page in ScrumDo, so we missed some fields that need to be added. This ticket is for adding them to Salesforce.
- Before/After WP Location - Header 2, move this to the Location Tab so directions to the WP as well as any Before/After event are displayed and printed together.
- Work Party Schedule - Header 2
- Planning information - Header 1
- What it takes to do this work party - Header 2
- What to Wear - Header 2
- What to Bring - Header 2
- More Info - Header 1, note links to FAQ, Trail Work Guide webpages. Also link to Hiking Guide and Trip Reports about this area (this is based on the Hiking Guide URL for the work party).
| priority | missing fields for work party on salesforce turns out the description field can be longer than what is shown on the iteration page in scrumdo so we missed some fields that need to be added this ticket is for adding them to salesforce before after wp location header move this to the location tab so directions to the wp as well as any before after event are display and printed together work party schedule header planning information header what it takes to do this work party header what to wear header what to bring header more info header note links to faq trail work guide webpages also link to hiking guide and trip reports about this area this is based on the hiking guide url for the work party | 1 |
562,365 | 16,658,184,374 | IssuesEvent | 2021-06-05 22:45:13 | nlpsandbox/nlpsandbox-client | https://api.github.com/repos/nlpsandbox/nlpsandbox-client | closed | Review example-patient-bundles.json | Priority: High | ### Is your proposal related to a problem?
How will data hosting sites load their data into their own data nodes?
### Describe the solution you'd like
We could potentially provide some code that takes in as input `example-patient-bundles.json` and pushes the evaluation dataset. This is motivated from when I started creating a cli function for storing notes only to realize that you must have a patient stored first prior to creating the note.
### Describe alternatives you've considered
Describe the following workflow (essentially the example.py file, but we use `example-patient-bundles.json`, which may or may not be the format that other sites use)
1. Create dataset
2. Create Fhir Store
3. Create Annotation Store
4. Push Patients
5. Push Notes
6. Push Annotations (Gold standard)
### Additional context
| 1.0 | Review example-patient-bundles.json - ### Is your proposal related to a problem?
How will data hosting sites load their data into their own data nodes?
### Describe the solution you'd like
We could potentially provide some code that takes in as input `example-patient-bundles.json` and pushes the evaluation dataset. This is motivated from when I started creating a cli function for storing notes only to realize that you must have a patient stored first prior to creating the note.
### Describe alternatives you've considered
Describe the following workflow (essentially the example.py file, but we use `example-patient-bundles.json`, which may or may not be the format that other sites use)
1. Create dataset
2. Create Fhir Store
3. Create Annotation Store
4. Push Patients
5. Push Notes
6. Push Annotations (Gold standard)
### Additional context
| priority | review example patient bundles json is your proposal related to a problem how will data hosting sites load their data into their own data nodes describe the solution you d like we could potentially provide some code that takes in as input example patient bundles json and pushes the evaluation dataset this is motivated from when i started creating a cli function for storing notes only to realize that you must have a patient stored first prior to creating the note describe alternatives you ve considered describe the following workflow essentially the example py file but we use example patient bundles json which may or may not be the format at which other sites create dataset create fhir store create annotation store push patients push notes push annotations gold standard additional context | 1 |
754,412 | 26,385,886,785 | IssuesEvent | 2023-01-12 12:14:55 | DwcJava/engine | https://api.github.com/repos/DwcJava/engine | closed | Introduce `HasDestroy` interface | Change: Medium Priority: High Type: Feature | Introduce a new `HasDestroy` interface with two methods [`destroy`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_destroy.htm) and [`isDestroyed`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_isdestroyed.htm) and implement in all controls.
1. The `destroy` method should check internally whether the control is already destroyed.
2. If the method is called on a control that has not been attached to a panel yet, the creation of the control should be skipped (track this with a flag).
| 1.0 | Introduce `HasDestroy` interface - Introduce a new `HasDestroy` interface with two methods [`destroy`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_destroy.htm) and [`isDestroyed`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_isdestroyed.htm) and implement in all controls.
1. The `destroy` method should check internally whether the control is already destroyed.
2. If the method is called on a control that has not been attached to a panel yet, the creation of the control should be skipped (track this with a flag).
| priority | introduce hasdestroy interface introduce a new hasdestroy interface with two methods and and implement in all controls the destroy method should check internally if a control is already destroyed if the method is called on a control which has not been attached to panel yet then the creation of the control should be skipped track with a flag | 1 |
463,461 | 13,265,873,472 | IssuesEvent | 2020-08-21 07:26:44 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Bad Sad Error Thrown When Building a Single BAL file | Area/Language Area/jBallerina Priority/High Type/Bug | **Description:**
I got a bad sad error when I tried to compile the attached `opportunities.bal` file [opportunities.zip](https://github.com/ballerina-platform/ballerina-lang/files/4994680/opportunities.zip). The log is shown below.
```
[2020-07-29 11:45:43,294] SEVERE {b7a.log.crash} - null
java.lang.NullPointerException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmPackageGen.generateModuleClasses(JvmPackageGen.java:501)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmPackageGen.generate(JvmPackageGen.java:490)
at org.wso2.ballerinalang.compiler.bir.codegen.CodeGenerator.generate(CodeGenerator.java:144)
at org.wso2.ballerinalang.compiler.bir.codegen.CodeGenerator.generate(CodeGenerator.java:118)
at org.wso2.ballerinalang.compiler.CompilerDriver.codeGen(CompilerDriver.java:310)
at org.wso2.ballerinalang.compiler.CompilerDriver.compile(CompilerDriver.java:306)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackageSymbol(CompilerDriver.java:248)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackage(CompilerDriver.java:140)
at org.wso2.ballerinalang.compiler.Compiler.compilePackages(Compiler.java:166)
at org.wso2.ballerinalang.compiler.Compiler.compilePackage(Compiler.java:174)
at org.wso2.ballerinalang.compiler.Compiler.compile(Compiler.java:91)
at org.wso2.ballerinalang.compiler.Compiler.build(Compiler.java:99)
at org.ballerinalang.packerina.task.CompileTask.execute(CompileTask.java:57)
at org.ballerinalang.packerina.TaskExecutor.executeTasks(TaskExecutor.java:38)
at org.ballerinalang.packerina.cmd.BuildCommand.execute(BuildCommand.java:433)
at java.util.Optional.ifPresent(Optional.java:159)
at org.ballerinalang.tool.Main.main(Main.java:57)
Caused by: java.lang.NullPointerException
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genStaticCall(JvmTerminatorGen.java:669)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genFuncCall(JvmTerminatorGen.java:631)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genCall(JvmTerminatorGen.java:614)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genCallTerm(JvmTerminatorGen.java:402)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genTerminator(JvmTerminatorGen.java:252)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmMethodGen.generateBasicBlocks(JvmMethodGen.java:2095)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmMethodGen.genJMethodForBFunc(JvmMethodGen.java:1808)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmMethodGen.generateMethod(JvmMethodGen.java:1658)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmPackageGen.lambda$generateModuleClasses$0(JvmPackageGen.java:550)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1699)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
```
**Steps to reproduce:**
Unzip [opportunities.zip](https://github.com/ballerina-platform/ballerina-lang/files/4994680/opportunities.zip) and build the BAL file using the following command.
`ballerina build opportunities.bal`
**Affected Versions:**
Ballerina Swan Lake Preview 2
**OS, DB, other environment details and versions:**
Ubuntu 18.04, JDK 1.8
**Related Issues (optional):**
N/A
**Suggested Labels (optional):**
N/A
**Suggested Assignees (optional):**
N/A
| 1.0 | Bad Sad Error Thrown When Building a Single BAL file - **Description:**
I got a bad sad error when I tried to compile the attached `opportunities.bal` file [opportunities.zip](https://github.com/ballerina-platform/ballerina-lang/files/4994680/opportunities.zip). The log is shown below.
```
[2020-07-29 11:45:43,294] SEVERE {b7a.log.crash} - null
java.lang.NullPointerException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmPackageGen.generateModuleClasses(JvmPackageGen.java:501)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmPackageGen.generate(JvmPackageGen.java:490)
at org.wso2.ballerinalang.compiler.bir.codegen.CodeGenerator.generate(CodeGenerator.java:144)
at org.wso2.ballerinalang.compiler.bir.codegen.CodeGenerator.generate(CodeGenerator.java:118)
at org.wso2.ballerinalang.compiler.CompilerDriver.codeGen(CompilerDriver.java:310)
at org.wso2.ballerinalang.compiler.CompilerDriver.compile(CompilerDriver.java:306)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackageSymbol(CompilerDriver.java:248)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackage(CompilerDriver.java:140)
at org.wso2.ballerinalang.compiler.Compiler.compilePackages(Compiler.java:166)
at org.wso2.ballerinalang.compiler.Compiler.compilePackage(Compiler.java:174)
at org.wso2.ballerinalang.compiler.Compiler.compile(Compiler.java:91)
at org.wso2.ballerinalang.compiler.Compiler.build(Compiler.java:99)
at org.ballerinalang.packerina.task.CompileTask.execute(CompileTask.java:57)
at org.ballerinalang.packerina.TaskExecutor.executeTasks(TaskExecutor.java:38)
at org.ballerinalang.packerina.cmd.BuildCommand.execute(BuildCommand.java:433)
at java.util.Optional.ifPresent(Optional.java:159)
at org.ballerinalang.tool.Main.main(Main.java:57)
Caused by: java.lang.NullPointerException
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genStaticCall(JvmTerminatorGen.java:669)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genFuncCall(JvmTerminatorGen.java:631)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genCall(JvmTerminatorGen.java:614)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genCallTerm(JvmTerminatorGen.java:402)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmTerminatorGen.genTerminator(JvmTerminatorGen.java:252)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmMethodGen.generateBasicBlocks(JvmMethodGen.java:2095)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmMethodGen.genJMethodForBFunc(JvmMethodGen.java:1808)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmMethodGen.generateMethod(JvmMethodGen.java:1658)
at org.wso2.ballerinalang.compiler.bir.codegen.JvmPackageGen.lambda$generateModuleClasses$0(JvmPackageGen.java:550)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1699)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
```
**Steps to reproduce:**
Unzip [opportunities.zip](https://github.com/ballerina-platform/ballerina-lang/files/4994680/opportunities.zip) and build the BAL file using the following command.
`ballerina build opportunities.bal`
**Affected Versions:**
Ballerina Swan Lake Preview 2
**OS, DB, other environment details and versions:**
Ubuntu 18.04, JDK 1.8
**Related Issues (optional):**
N/A
**Suggested Labels (optional):**
N/A
**Suggested Assignees (optional):**
N/A
| priority | bad sad error thrown when building a single bal file description i got a bad sad error when i try to compile the attached opportunities bal file the log is shown below severe log crash null java lang nullpointerexception at sun reflect nativeconstructoraccessorimpl native method at sun reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java lang reflect constructor newinstance constructor java at java util concurrent forkjointask getthrowableexception forkjointask java at java util concurrent forkjointask reportexception forkjointask java at java util concurrent forkjointask invoke forkjointask java at java util stream foreachops foreachop evaluateparallel foreachops java at java util stream foreachops foreachop ofref evaluateparallel foreachops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline foreach referencepipeline java at java util stream referencepipeline head foreach referencepipeline java at org ballerinalang compiler bir codegen jvmpackagegen generatemoduleclasses jvmpackagegen java at org ballerinalang compiler bir codegen jvmpackagegen generate jvmpackagegen java at org ballerinalang compiler bir codegen codegenerator generate codegenerator java at org ballerinalang compiler bir codegen codegenerator generate codegenerator java at org ballerinalang compiler compilerdriver codegen compilerdriver java at org ballerinalang compiler compilerdriver compile compilerdriver java at org ballerinalang compiler compilerdriver compilepackagesymbol compilerdriver java at org ballerinalang compiler compilerdriver compilepackage compilerdriver java at org ballerinalang compiler compiler compilepackages compiler java at org ballerinalang compiler compiler compilepackage compiler java at org ballerinalang compiler compiler compile compiler java at org 
ballerinalang compiler compiler build compiler java at org ballerinalang packerina task compiletask execute compiletask java at org ballerinalang packerina taskexecutor executetasks taskexecutor java at org ballerinalang packerina cmd buildcommand execute buildcommand java at java util optional ifpresent optional java at org ballerinalang tool main main main java caused by java lang nullpointerexception at org ballerinalang compiler bir codegen jvmterminatorgen genstaticcall jvmterminatorgen java at org ballerinalang compiler bir codegen jvmterminatorgen genfunccall jvmterminatorgen java at org ballerinalang compiler bir codegen jvmterminatorgen gencall jvmterminatorgen java at org ballerinalang compiler bir codegen jvmterminatorgen gencallterm jvmterminatorgen java at org ballerinalang compiler bir codegen jvmterminatorgen genterminator jvmterminatorgen java at org ballerinalang compiler bir codegen jvmmethodgen generatebasicblocks jvmmethodgen java at org ballerinalang compiler bir codegen jvmmethodgen genjmethodforbfunc jvmmethodgen java at org ballerinalang compiler bir codegen jvmmethodgen generatemethod jvmmethodgen java at org ballerinalang compiler bir codegen jvmpackagegen lambda generatemoduleclasses jvmpackagegen java at java util stream foreachops foreachop ofref accept foreachops java at java util hashmap entryspliterator foreachremaining hashmap java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream foreachops foreachtask compute foreachops java at java util concurrent countedcompleter exec countedcompleter java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjoinpool workqueue runtask forkjoinpool java at java util concurrent forkjoinpool runworker forkjoinpool java at java util concurrent forkjoinworkerthread run forkjoinworkerthread java steps to reproduce unzip and build the bal file using the following command ballerina build opportunities bal affected versions 
ballerina swan lake preview os db other environment details and versions ubuntu jdk related issues optional n a suggested labels optional n a suggested assignees optional n a | 1 |
333,365 | 10,120,985,800 | IssuesEvent | 2019-07-31 14:46:07 | IgniteUI/igniteui-angular | https://api.github.com/repos/IgniteUI/igniteui-angular | closed | Filtering Error. You provided an invalid object where a stream was expected. You can provide an Observable, Promise, Array, or Iterable. | bug chip duplicate filter-ui filtering priority: high version: 8.0.x | Hello.
I use angular 8.0.x and igniteui-angular 8.0.x.
If I use the default filter mode, I get this error:
TypeError: You provided an invalid object where a stream was expected. You can provide an Observable, Promise, Array, or Iterable.
at subscribeTo (subscribeTo.js:27)
at subscribeToResult (subscribeToResult.js:11)
at TakeUntilOperator.call (takeUntil.js:12)
at AnonymousSubject.subscribe (Observable.js:23)
at igniteui-angular.js:27059
at DefaultIterableDiffer.forEachAddedItem (core.js:23952)
at IgxChipsAreaComponent.ngDoCheck (igniteui-angular.js:27058)
at checkAndUpdateDirectiveInline (core.js:27779)
at checkAndUpdateNodeInline (core.js:38466)
at checkAndUpdateNode (core.js:38405)
but if I use filter mode = 'excelStyleFilter', the error doesn't appear.
Thanks in advance | 1.0 | Filtering Error. You provided an invalid object where a stream was expected. You can provide an Observable, Promise, Array, or Iterable. - Hello.
I use angular 8.0.x and igniteui-angular 8.0.x.
If I use the default filter mode, I get this error:
TypeError: You provided an invalid object where a stream was expected. You can provide an Observable, Promise, Array, or Iterable.
at subscribeTo (subscribeTo.js:27)
at subscribeToResult (subscribeToResult.js:11)
at TakeUntilOperator.call (takeUntil.js:12)
at AnonymousSubject.subscribe (Observable.js:23)
at igniteui-angular.js:27059
at DefaultIterableDiffer.forEachAddedItem (core.js:23952)
at IgxChipsAreaComponent.ngDoCheck (igniteui-angular.js:27058)
at checkAndUpdateDirectiveInline (core.js:27779)
at checkAndUpdateNodeInline (core.js:38466)
at checkAndUpdateNode (core.js:38405)
but if I use filter mode = 'excelStyleFilter', the error doesn't appear.
Thanks in advance | priority | filtering error you provided an invalid object where a stream was expected you can provide an observable promise array or iterable hello i use angular x and igniteui angular x if i use default filter mode i get error typeerror you provided an invalid object where a stream was expected you can provide an observable promise array or iterable at subscribeto subscribeto js at subscribetoresult subscribetoresult js at takeuntiloperator call takeuntil js at anonymoussubject subscribe observable js at igniteui angular js at defaultiterablediffer foreachaddeditem core js at igxchipsareacomponent ngdocheck igniteui angular js at checkandupdatedirectiveinline core js at checkandupdatenodeinline core js at checkandupdatenode core js but if i use filter mode excelstylefilter error doesn t appear thanks in advanced | 1 |
645,045 | 20,993,043,621 | IssuesEvent | 2022-03-29 11:06:11 | aave/interface | https://api.github.com/repos/aave/interface | closed | Governance differential % | bug priority:high | Looks like Gov differentials don't show correct %
Take a look to see if it makes sense to show differential votes instead | 1.0 | Governance differential % - Looks like Gov differentials don't show correct %
Take a look to see if it makes sense to show differential votes instead | priority | governance diferential looks like gov differentials don t show correct take a look to see if it makes sense to show differential votes instead | 1 |
807,310 | 29,994,623,713 | IssuesEvent | 2023-06-26 03:49:17 | longhorn/longhorn | https://api.github.com/repos/longhorn/longhorn | closed | [FEATURE] Support SPDK Data Engine - Preview | kind/enhancement Epic highlight priority/0 area/spdk experimental | ## Is your feature request related to a problem? Please describe (👍 if you like this request)
Support the SPDK data engine with fundamental functions for the volume lifecycle, including the resilience capabilities below.
- Volume lifecycle (create, delete, attach, detach)
- Replica rebuilding
- Degrade volume
- Snapshot
## Describe the solution you'd like
This is an EPIC issue, so there will be some dependent issues referenced here.
## Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
## Additional context
cc @longhorn/dev-data-plane
| 1.0 | [FEATURE] Support SPDK Data Engine - Preview - ## Is your feature request related to a problem? Please describe (👍 if you like this request)
Support the SPDK data engine with fundamental functions for the volume lifecycle, including the resilience capabilities below.
- Volume lifecycle (create, delete, attach, detach)
- Replica rebuilding
- Degrade volume
- Snapshot
## Describe the solution you'd like
This is an EPIC issue, so there will be some dependent issues referenced here.
## Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
## Additional context
cc @longhorn/dev-data-plane
| priority | support spdk data engine preview is your feature request related to a problem please describe 👍 if you like this request support spdk data engine with fundamental functions for volume lifecycle and include below resilience capabilities volume lifecycle create delete attach detach replica rebuilding degrade volume snapshot describe the solution you d like this is an epic issue so there will be some dependent issues referenced here describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context cc longhorn dev data plane | 1 |
619,193 | 19,518,711,942 | IssuesEvent | 2021-12-29 14:39:33 | bounswe/2021SpringGroup3 | https://api.github.com/repos/bounswe/2021SpringGroup3 | closed | Backend: Activity Stream Implementation | Type: Feature Status: Completed Priority: High Component: Backend | [The necessary activities](https://www.w3.org/TR/activitystreams-vocabulary/#activity-types) should be stored in the Activity Stream format.
Example:
{
"@context": "https://www.w3.org/ns/activitystreams",
"summary": "Sally accepted Joe into the club",
"type": "Accept",
"actor": {
"type": "Person",
"name": "Sally"
},
"object": {
"type": "Person",
"name": "Joe"
},
"target": {
"type": "Group",
"name": "The Club"
}
} | 1.0 | Backend: Activity Stream Implementation - [The necessary activities](https://www.w3.org/TR/activitystreams-vocabulary/#activity-types) should be stored in the Activity Stream format.
Example:
{
"@context": "https://www.w3.org/ns/activitystreams",
"summary": "Sally accepted Joe into the club",
"type": "Accept",
"actor": {
"type": "Person",
"name": "Sally"
},
"object": {
"type": "Person",
"name": "Joe"
},
"target": {
"type": "Group",
"name": "The Club"
}
} | priority | backend activity stream implementation should be stored in the activity stream format example context summary sally accepted joe into the club type accept actor type person name sally object type person name joe target type group name the club | 1 |
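The Activity Streams example above is plain JSON, so persisting such activities amounts to building and serializing a small object. A minimal sketch in Python, assuming nothing about the project's actual backend (the `make_accept_activity` helper name and summary wording are illustrative, not part of the project):

```python
import json

def make_accept_activity(actor_name, object_name, target_name):
    """Build a minimal Activity Streams 2.0 'Accept' activity as a plain dict."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "summary": f"{actor_name} accepted {object_name} into {target_name}",
        "type": "Accept",
        "actor": {"type": "Person", "name": actor_name},
        "object": {"type": "Person", "name": object_name},
        "target": {"type": "Group", "name": target_name},
    }

activity = make_accept_activity("Sally", "Joe", "the club")
stored = json.dumps(activity)  # the JSON string that would be persisted
assert json.loads(stored)["type"] == "Accept"
```

Round-tripping through `json.dumps`/`json.loads` is a cheap way to confirm a stored activity is still valid JSON before writing it to the database.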
545,781 | 15,963,476,631 | IssuesEvent | 2021-04-16 04:01:17 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | delete button for part not working | Bug Priority-High | I am trying to delete a part that got scanned to the wrong specimen by mistake. I tried clicking on the delete button for the part, and nothing happens. Tried on both Chrome and Firefox.
Specimen https://arctos.database.museum/guid/MVZ:Bird:193200
Container MVZ254316
Part 29748019 | 1.0 | delete button for part not working - I am trying to delete a part that got scanned to the wrong specimen by mistake. I tried clicking on the delete button for the part, and nothing happens. Tried on both Chrome and Firefox.
Specimen https://arctos.database.museum/guid/MVZ:Bird:193200
Container MVZ254316
Part 29748019 | priority | delete button for part not working i am trying to delete a part that got scanned to the wrong specimen by mistake i tried clicking on the delete button for the part and nothing happens tried on both chrome and firefox specimen container part | 1 |
372,286 | 11,012,254,990 | IssuesEvent | 2019-12-04 17:52:22 | zephyrproject-rtos/infrastructure | https://api.github.com/repos/zephyrproject-rtos/infrastructure | opened | Zephyr Website: Resources --> Presentations | Blocker area: Website priority: high | First it hides the title "Presentations" so that you can't tell where you went. This is a problem with other navigation.
Second: Sync with Mae on what should go in the Presentations page - Gold Deck is old etc.
Have 4-5 from last year and add links to videos etc.
Long term sync with board on what needs to be highlighted.
| 1.0 | Zephyr Website: Resources --> Presentations - First it hides the title "Presentations" so that you can't tell where you went. This is a problem with other navigation.
Second: Sync with Mae on what should go in the Presentations page - Gold Deck is old etc.
Have 4-5 from last year and add links to videos etc.
Long term sync with board on what needs to be highlighted.
| priority | zephyr website resources presentations first it hides the title presentations so that you can t tell where you went this is a problem with other navigation second sync with mae on what should go in the presentations page gold deck is old etc have from last year and add links to videos etc long term sync with board on what needs to be highlighted | 1 |
666,359 | 22,351,761,565 | IssuesEvent | 2022-06-15 12:41:06 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | PANIC: Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int") in query-engine/core/src/interpreter/interpreter.rs:68:26 | bug/1-unconfirmed kind/bug topic: mysql team/client topic: basic error report topic: referentialIntegrity priority/high size/m | Hi Prisma Team! My Prisma Client just crashed. This is the report:
## Versions
| Name | Version |
|-----------------|--------------------|
| Node | v16.14.2 |
| OS | debian-openssl-1.1.x|
| Prisma Client | 3.11.1 |
| Query Engine | 0.1.0 |
| Database | mysql |
I get crashes when trying to run 4 queries in series.
Most of the errors contain this message, which says that an **id** (a **uuid**-format **string**) from one table is being converted for another table whose **id** is of type **Int**, which makes no sense, as seen in the **ConversionFailure**.
I have checked and rechecked this for some days now: the ID is created by one query, but based on the error, the same ID is being used in another query without my writing or specifying it anywhere. The other query simply updates a plain string field on a table, with no variables involved!
```
PANIC: Expected record selection to contain required model ID fields.: ConversionFailure("\"c756c9c1-6f9b-46fe-83cb-445c530435d5\"", "Int") in query-engine/core/src/interpreter/interpreter.rs:68:26
```
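Stripped of Prisma specifics, the `ConversionFailure` above is what happens when a uuid-format string is coerced to an integer ID. A tiny illustrative sketch (the `to_int_id` helper is hypothetical, not engine code):

```python
def to_int_id(value):
    """Coerce a record ID to Int, as a table with an Int id column requires."""
    try:
        return int(value)
    except ValueError as exc:
        # Mirror the shape of the engine's ConversionFailure("<value>", "Int") panic.
        raise ValueError(f'ConversionFailure("{value}", "Int")') from exc

assert to_int_id("42") == 42  # a numeric-string Int id converts fine

try:
    to_int_id("c756c9c1-6f9b-46fe-83cb-445c530435d5")  # a uuid string cannot
except ValueError as err:
    print(err)
```

Since the report suggests an ID from one model leaking into a query on another, a likely application-side check is that each query's `where`/`data` IDs match the ID type declared for that model in `schema.prisma`.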
## Logs
<details>
<summary>
Click to see full log with RUST_BACKTRACE=1
</summary>
```
[1] thread 'tokio-runtime-worker' panicked at 'Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int")', query-engine/core/src/interpreter/interpreter.rs:68:26
[1] stack backtrace:
[1] 0: rust_begin_unwind
[1] at /rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/std/src/panicking.rs:498:5
[1] 1: core::panicking::panic_fmt
[1] at /rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/panicking.rs:116:14
[1] 2: core::result::unwrap_failed
[1] at /rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/result.rs:1690:5
[1] 3: query_core::interpreter::interpreter::ExpressionResult::as_selection_results
[1] 4: core::ops::function::FnOnce::call_once{{vtable.shim}}
[1] 5: query_core::interpreter::interpreter::QueryInterpreter::interpret
[1] 6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 17: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 20: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 21: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 23: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 24: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 25: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 26: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 27: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 28: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 29: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 30: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 31: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 32: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 33: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 34: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 35: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 36: tokio::runtime::task::harness::poll_future
[1] 37: tokio::runtime::task::harness::Harness<T,S>::poll
[1] 38: std::thread::local::LocalKey<T>::with
[1] 39: tokio::runtime::thread_pool::worker::Context::run_task
[1] 40: tokio::runtime::thread_pool::worker::Context::run
[1] 41: tokio::macros::scoped_tls::ScopedKey<T>::set
[1] 42: tokio::runtime::thread_pool::worker::run
[1] 43: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
[1] 44: tokio::runtime::task::harness::Harness<T,S>::poll
[1] 45: tokio::runtime::blocking::pool::Inner::run
[1] note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[1] Gabim ne server [register business] PrismaClientUnknownRequestError:
[1] Invalid `prisma.user.update()` invocation:
[1]
[1]
[1] Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int")
[1] at Object.request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:39822:15)
[1] at PrismaClient._request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:40649:18)
[1] at Object.callRouteAction (/home/dev/projects/se/node_modules/@remix-run/server-runtime/data.js:40:14)
[1] at handleDataRequest (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:94:18)
[1] at requestHandler (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:34:18)
[1] at /home/dev/projects/se/node_modules/@remix-run/express/server.js:41:22 {
[1] clientVersion: '3.11.1'
[1] }
[1] POST /regjistro-biznes?_data=routes%2Fuser%2Fregister-business 400 - - 359.268 ms
[1] PrismaClientRustPanicError:
[1] Invalid `prisma.user.findFirst()` invocation:
[1]
[1]
[1] PANIC: Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int") in query-engine/core/src/interpreter/interpreter.rs:68:26
[1]
[1] This is a non-recoverable error which probably happens when the Prisma Query Engine has a panic.
[1]
[1] https://github.com/prisma/prisma/issues/new?body=Hi+Prisma+Team%21+My+Prisma+Client+just+crashed.+This+is+the+report%3A%0A%23%23+Versions%0A%0A%7C+Name++++++++++++%7C+Version++++++++++++%7C%0A%7C-----------------%7C--------------------%7C%0A%7C+Node++++++++++++%7C+v16.14.2+++++++++++%7C+%0A%7C+OS++++++++++++++%7C+debian-openssl-1.1.x%7C%0A%7C+Prisma+Client+++%7C+3.11.1+++++++++++++%7C%0A%7C+Query+Engine++++%7C+0.1.0++++++++++++++%7C%0A%7C+Database++++++++%7C+mysql++++++++++++++%7C%0A%0A%0A%0A%23%23+Logs%0A%60%60%60%0Aprisma%3AtryLoadEnv+Environment+variables+loaded+from+%2Fhome%2Fdev%2Fprojects%2Fse%2F.env%0Aprisma%3AtryLoadEnv+Environment+variables+loaded+from+%2Fhome%2Fdev%2Fprojects%2Fse%2F.env%0Aprisma%3Aclient+dirname++%2Fhome%2Fdev%2Fprojects%2Fse%2Fnode_modules%2F.prisma%2Fclient%0Aprisma%3Aclient+relativePath++..%2F..%2F..%2Fprisma%0Aprisma%3Aclient+cwd++%2Fhome%2Fdev%2Fprojects%2Fse%2Fprisma%0Aprisma%3Aclient+clientVersion%3A+3.11.1%0Aprisma%3Aclient+clientEngineType%3A+library%0Aprisma%3Aclient%3AlibraryEngine+internalSetup%0Aprisma%3Aclient%3AlibraryEngine+Searching+for+Query+Engine+Library+in+%2Fhome%2Fdev%2Fprojects%2Fse%2Fnode_modules%2F.prisma%2Fclient%0Aprisma%3Aclient%3AlibraryEngine+loadEngine+using+%2Fhome%2Fdev%2Fprojects%2Fse%2Fnode_modules%2F.prisma%2Fclient%2Flibquery_engine-debian-openssl-1.1.x.so.node%0Aprisma%3Aclient%3AlibraryEngine+library+starting%0Aprisma%3Aclient%3AlibraryEngine+library+started%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.lib
raryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0A%60%60%60%0A%0A%23%23+Client+Snippet%0A%60%60%60ts%0A%2F%2F+PLEASE+FILL+YOUR+CODE+SNIPPET+HERE%0A%60%60%60%0A%0A%23%23+Schema%0A%60%60%60prisma%0A%2F%2F+PLEASE+ADD+YOUR+SCHEMA+HERE+IF+POSSIBLE%0A%60%60%60%0A%0A%23%23+Prisma+Engine+Query%0A%60%60%60%0A%7B%22X%22%3A%7B%7D%7D%0A%60%60%60%0A&title=PANIC%3A+Expected+record+selection+to+contain+required+model+ID+fields.%3A+ConversionFailure%28%22%5C%2251062868-8142-47da-9ca3-3d90954579e2%5C%22%22%2C+%22Int%22%29+in+query-engine%2Fcore%2Fsrc%2Finterpreter%2Finterpreter.rs%3A68%3A26&template=bug_report.md
[1]
[1] If you want the Prisma team to look into it, please open the link above 🙏
[1] To increase the chance of success, please post your schema and a snippet of
[1] how you used Prisma Client in the issue.
[1]
[1] at Object.request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:39826:15)
[1] at PrismaClient._request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:40649:18)
[1] at isAuthenticated (/home/dev/projects/se/app/utils/session.server.ts:272:19)
[1] at Object.callRouteLoader (/home/dev/projects/se/node_modules/@remix-run/server-runtime/data.js:77:14)
[1] at handleDataRequest (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:113:18)
[1] at requestHandler (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:34:18)
[1] at /home/dev/projects/se/node_modules/@remix-run/express/server.js:41:22 {
[1] clientVersion: '3.11.1'
[1] }
```
</details>
## Thoughts
I have seen this kind of bug in other issues in the Prisma repo. As of today there are **1900 issues**, and I believe this repo is not being maintained enough and is being commercialized too much. I am considering switching to another DB client after coding with Prisma for 2 months.
| 1.0 | PANIC: Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int") in query-engine/core/src/interpreter/interpreter.rs:68:26 - Hi Prisma Team! My Prisma Client just crashed. This is the report:
## Versions
| Name | Version |
|-----------------|--------------------|
| Node | v16.14.2 |
| OS | debian-openssl-1.1.x|
| Prisma Client | 3.11.1 |
| Query Engine | 0.1.0 |
| Database | mysql |
I get crashes when trying to run 4 queries in series.
Most of the errors contain this message, which says that an **id** (a **uuid**-format **string**) from one table is being converted for another table whose **id** is of type **Int**, which makes no sense, as seen in the **ConversionFailure**.
I have checked and rechecked this for some days now: the ID is created by one query, but based on the error, the same ID is being used in another query without my writing or specifying it anywhere. The other query simply updates a plain string field on a table, with no variables involved!
```
PANIC: Expected record selection to contain required model ID fields.: ConversionFailure("\"c756c9c1-6f9b-46fe-83cb-445c530435d5\"", "Int") in query-engine/core/src/interpreter/interpreter.rs:68:26
```
## Logs
<details>
<summary>
Click to see full log with RUST_BACKTRACE=1
</summary>
```
[1] thread 'tokio-runtime-worker' panicked at 'Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int")', query-engine/core/src/interpreter/interpreter.rs:68:26
[1] stack backtrace:
[1] 0: rust_begin_unwind
[1] at /rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/std/src/panicking.rs:498:5
[1] 1: core::panicking::panic_fmt
[1] at /rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/panicking.rs:116:14
[1] 2: core::result::unwrap_failed
[1] at /rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/result.rs:1690:5
[1] 3: query_core::interpreter::interpreter::ExpressionResult::as_selection_results
[1] 4: core::ops::function::FnOnce::call_once{{vtable.shim}}
[1] 5: query_core::interpreter::interpreter::QueryInterpreter::interpret
[1] 6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 17: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 20: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 21: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 23: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 24: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 25: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 26: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 27: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 28: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 29: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 30: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 31: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 32: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 33: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 34: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 35: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
[1] 36: tokio::runtime::task::harness::poll_future
[1] 37: tokio::runtime::task::harness::Harness<T,S>::poll
[1] 38: std::thread::local::LocalKey<T>::with
[1] 39: tokio::runtime::thread_pool::worker::Context::run_task
[1] 40: tokio::runtime::thread_pool::worker::Context::run
[1] 41: tokio::macros::scoped_tls::ScopedKey<T>::set
[1] 42: tokio::runtime::thread_pool::worker::run
[1] 43: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
[1] 44: tokio::runtime::task::harness::Harness<T,S>::poll
[1] 45: tokio::runtime::blocking::pool::Inner::run
[1] note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[1] Gabim ne server [register business] PrismaClientUnknownRequestError:
[1] Invalid `prisma.user.update()` invocation:
[1]
[1]
[1] Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int")
[1] at Object.request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:39822:15)
[1] at PrismaClient._request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:40649:18)
[1] at Object.callRouteAction (/home/dev/projects/se/node_modules/@remix-run/server-runtime/data.js:40:14)
[1] at handleDataRequest (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:94:18)
[1] at requestHandler (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:34:18)
[1] at /home/dev/projects/se/node_modules/@remix-run/express/server.js:41:22 {
[1] clientVersion: '3.11.1'
[1] }
[1] POST /regjistro-biznes?_data=routes%2Fuser%2Fregister-business 400 - - 359.268 ms
[1] PrismaClientRustPanicError:
[1] Invalid `prisma.user.findFirst()` invocation:
[1]
[1]
[1] PANIC: Expected record selection to contain required model ID fields.: ConversionFailure("\"51062868-8142-47da-9ca3-3d90954579e2\"", "Int") in query-engine/core/src/interpreter/interpreter.rs:68:26
[1]
[1] This is a non-recoverable error which probably happens when the Prisma Query Engine has a panic.
[1]
[1] https://github.com/prisma/prisma/issues/new?body=Hi+Prisma+Team%21+My+Prisma+Client+just+crashed.+This+is+the+report%3A%0A%23%23+Versions%0A%0A%7C+Name++++++++++++%7C+Version++++++++++++%7C%0A%7C-----------------%7C--------------------%7C%0A%7C+Node++++++++++++%7C+v16.14.2+++++++++++%7C+%0A%7C+OS++++++++++++++%7C+debian-openssl-1.1.x%7C%0A%7C+Prisma+Client+++%7C+3.11.1+++++++++++++%7C%0A%7C+Query+Engine++++%7C+0.1.0++++++++++++++%7C%0A%7C+Database++++++++%7C+mysql++++++++++++++%7C%0A%0A%0A%0A%23%23+Logs%0A%60%60%60%0Aprisma%3AtryLoadEnv+Environment+variables+loaded+from+%2Fhome%2Fdev%2Fprojects%2Fse%2F.env%0Aprisma%3AtryLoadEnv+Environment+variables+loaded+from+%2Fhome%2Fdev%2Fprojects%2Fse%2F.env%0Aprisma%3Aclient+dirname++%2Fhome%2Fdev%2Fprojects%2Fse%2Fnode_modules%2F.prisma%2Fclient%0Aprisma%3Aclient+relativePath++..%2F..%2F..%2Fprisma%0Aprisma%3Aclient+cwd++%2Fhome%2Fdev%2Fprojects%2Fse%2Fprisma%0Aprisma%3Aclient+clientVersion%3A+3.11.1%0Aprisma%3Aclient+clientEngineType%3A+library%0Aprisma%3Aclient%3AlibraryEngine+internalSetup%0Aprisma%3Aclient%3AlibraryEngine+Searching+for+Query+Engine+Library+in+%2Fhome%2Fdev%2Fprojects%2Fse%2Fnode_modules%2F.prisma%2Fclient%0Aprisma%3Aclient%3AlibraryEngine+loadEngine+using+%2Fhome%2Fdev%2Fprojects%2Fse%2Fnode_modules%2F.prisma%2Fclient%2Flibquery_engine-debian-openssl-1.1.x.so.node%0Aprisma%3Aclient%3AlibraryEngine+library+starting%0Aprisma%3Aclient%3AlibraryEngine+library+started%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.lib
raryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0Aprisma%3Aclient%3AlibraryEngine+sending+request%2C+this.libraryStarted%3A+true%0A%60%60%60%0A%0A%23%23+Client+Snippet%0A%60%60%60ts%0A%2F%2F+PLEASE+FILL+YOUR+CODE+SNIPPET+HERE%0A%60%60%60%0A%0A%23%23+Schema%0A%60%60%60prisma%0A%2F%2F+PLEASE+ADD+YOUR+SCHEMA+HERE+IF+POSSIBLE%0A%60%60%60%0A%0A%23%23+Prisma+Engine+Query%0A%60%60%60%0A%7B%22X%22%3A%7B%7D%7D%0A%60%60%60%0A&title=PANIC%3A+Expected+record+selection+to+contain+required+model+ID+fields.%3A+ConversionFailure%28%22%5C%2251062868-8142-47da-9ca3-3d90954579e2%5C%22%22%2C+%22Int%22%29+in+query-engine%2Fcore%2Fsrc%2Finterpreter%2Finterpreter.rs%3A68%3A26&template=bug_report.md
[1]
[1] If you want the Prisma team to look into it, please open the link above 🙏
[1] To increase the chance of success, please post your schema and a snippet of
[1] how you used Prisma Client in the issue.
[1]
[1] at Object.request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:39826:15)
[1] at PrismaClient._request (/home/dev/projects/se/node_modules/@prisma/client/runtime/index.js:40649:18)
[1] at isAuthenticated (/home/dev/projects/se/app/utils/session.server.ts:272:19)
[1] at Object.callRouteLoader (/home/dev/projects/se/node_modules/@remix-run/server-runtime/data.js:77:14)
[1] at handleDataRequest (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:113:18)
[1] at requestHandler (/home/dev/projects/se/node_modules/@remix-run/server-runtime/server.js:34:18)
[1] at /home/dev/projects/se/node_modules/@remix-run/express/server.js:41:22 {
[1] clientVersion: '3.11.1'
[1] }
```
</details>
## Thoughts
I have seen this kind of bug in other issues in the Prisma repo. As of today there are **1900 issues**, and I believe this repo is not being maintained enough and is being commercialized too much. I am considering switching to another db client after coding for 2 months with Prisma.
| priority | panic expected record selection to contain required model id fields conversionfailure int in query engine core src interpreter interpreter rs hi prisma team my prisma client just crashed this is the report versions name version node os debian openssl x prisma client query engine database mysql i have crashes when trying to do queries in series most of the errors have this which says that an id format uuid type string from one table is being used to convert it for another table that has id type int without any sense as seen on conversionfailure i checked and rechecked this for some days now the id is created by one query but based on the error same id is being used on another query without coding it at all without specifying it anywhere the other query is about updating a simple string field on a table without any variable panic expected record selection to contain required model id fields conversionfailure int in query engine core src interpreter interpreter rs logs click to see full log with rust backtrace thread tokio runtime worker panicked at expected record selection to contain required model id fields conversionfailure int query engine core src interpreter interpreter rs stack backtrace rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core result unwrap failed at rustc library core src result rs query core interpreter interpreter expressionresult as selection results core ops function fnonce call once vtable shim query core interpreter interpreter queryinterpreter interpret as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as 
core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll tokio runtime task harness poll future tokio runtime task harness harness poll std thread local localkey with tokio runtime thread pool worker context run task tokio runtime thread pool worker context run tokio macros scoped tls scopedkey set tokio runtime thread pool worker run as core future future future poll tokio runtime task harness harness poll tokio runtime blocking pool inner run note some details are omitted run with rust backtrace full for a verbose backtrace gabim ne server prismaclientunknownrequesterror invalid prisma user update invocation expected record selection to contain required model id fields conversionfailure int at object request home dev projects se node modules prisma client runtime index js at prismaclient request home dev projects se node modules prisma client runtime index js at object callrouteaction home dev projects se node modules remix run server runtime data js at handledatarequest home dev projects se node modules remix run server runtime server js at requesthandler home dev projects se node modules remix run server runtime server js at home dev projects se node modules remix run express server js clientversion post regjistro biznes data routes business ms prismaclientrustpanicerror invalid prisma user findfirst invocation panic expected record selection to contain required model id fields conversionfailure int in query engine core 
src interpreter interpreter rs this is a non recoverable error which probably happens when the prisma query engine has a panic if you want the prisma team to look into it please open the link above 🙏 to increase the chance of success please post your schema and a snippet of how you used prisma client in the issue at object request home dev projects se node modules prisma client runtime index js at prismaclient request home dev projects se node modules prisma client runtime index js at isauthenticated home dev projects se app utils session server ts at object callrouteloader home dev projects se node modules remix run server runtime data js at handledatarequest home dev projects se node modules remix run server runtime server js at requesthandler home dev projects se node modules remix run server runtime server js at home dev projects se node modules remix run express server js clientversion thoughts this kind of bug i have seen it on other issues of prisma repo from today there are issues and i believe this repo is not being maintained enough and is being commercialized too much i am considering switching to another db client after coding for months with prisma | 1 |
86,890 | 3,734,766,095 | IssuesEvent | 2016-03-08 08:59:30 | YeOldeDM/wangu | https://api.github.com/repos/YeOldeDM/wangu | opened | Structures stacking on game restore | bug high priority | I thought I had squashed this bug a while ago, but it's crept up again.
Something is still telling the construction script to build all buildings on init, rather than only the ones in the restore file. Should be easy to fix, I'm just too lazy to dig into it now. | 1.0 | Structures stacking on game restore - I thought I had squashed this bug a while ago, but it's crept up again.
Something is still telling the construction script to build all buildings on init, rather than only the ones in the restore file. Should be easy to fix, I'm just too lazy to dig into it now. | priority | structures stacking on game restore i thought i had squashed this bug a while ago but it s crept up again something is still telling the construction script to build all buildings on init rather than only the ones in the restore file should be easy to fix i m just too lazy to dig into it now | 1
714,140 | 24,552,375,051 | IssuesEvent | 2022-10-12 13:32:34 | zowe/zowe-cli | https://api.github.com/repos/zowe/zowe-cli | closed | No error code returned for invalid credentials | bug priority-high | Hello, it seems when the CLI throws error _Must have user & password OR base64 encoded credentials_, there is no error code returned. So we must check the `error.message` to tell if this error occurred.

| 1.0 | No error code returned for invalid credentials - Hello, it seems when the CLI throws error _Must have user & password OR base64 encoded credentials_, there is no error code returned. So we must check the `error.message` to tell if this error occurred.

| priority | no error code returned for invalid credentials hello it seems when the cli throws error must have user password or encoded credentials there is no error code returned so we must check the error message to tell if this error occurred | 1 |
591,852 | 17,863,763,142 | IssuesEvent | 2021-09-06 06:47:31 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | The word "positive" appears on "Install Ubuntu" tutorial | Priority: High | ## Summary
https://ubuntu.com/tutorials/install-ubuntu-desktop#8-select-your-location has an erroneous "positive" word below the image.
## Process
View the page https://ubuntu.com/tutorials/install-ubuntu-desktop#8-select-your-location
## Browser details
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Firefox/91.0 (Firefox 91) - also present in Chrome. | 1.0 | The word "positive" appears on "Install Ubuntu" tutorial - ## Summary
https://ubuntu.com/tutorials/install-ubuntu-desktop#8-select-your-location has an erroneous "positive" word below the image.
## Process
View the page https://ubuntu.com/tutorials/install-ubuntu-desktop#8-select-your-location
## Browser details
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Firefox/91.0 (Firefox 91) - also present in Chrome. | priority | the word positive appears on install ubuntu tutorial summary has an erroneous positive word below the image process view the page browser details mozilla macintosh intel mac os x rv gecko firefox firefox also present in chrome | 1 |
131,120 | 5,143,463,009 | IssuesEvent | 2017-01-12 16:03:37 | rwth-afu/RustPager | https://api.github.com/repos/rwth-afu/RustPager | closed | The last character of a transmission is missing | Priority: High Type: Bug | Hello everyone,
I noticed that the last character of transmissions is missing.
INFO - Transmission completed.
INFO - Received Message { id: 0, mtype: AlphaNum, speed: Baud(1200), addr: 49872, func: AlphaNum, data: "Hallo Robert, dies ist ein Test 123123123" }
Attached is a photo of the received transmission

 | 1.0 | The last character of a transmission is missing - Hello everyone,
I noticed that the last character of transmissions is missing.
INFO - Transmission completed.
INFO - Received Message { id: 0, mtype: AlphaNum, speed: Baud(1200), addr: 49872, func: AlphaNum, data: "Hallo Robert, dies ist ein Test 123123123" }
Attached is a photo of the received transmission

 | priority | the last character of a transmission is missing hello everyone i noticed that the last character of transmissions is missing info transmission completed info received message id mtype alphanum speed baud addr func alphanum data hallo robert dies ist ein test attached is a photo of the received transmission | 1
33,313 | 2,763,845,516 | IssuesEvent | 2015-04-29 12:23:10 | handsontable/handsontable | https://api.github.com/repos/handsontable/handsontable | closed | Handsontable not reading width/height/overflow defined in the external CSS properly | Bug Priority: high Released | When Handsontable is put inside a container, which has the `overflow`, `height` and `width` properties defined in an external stylesheet, it doesn't work properly (too many cells are being rendered + scrolling issues) | 1.0 | Handsontable not reading width/height/overflow defined in the external CSS properly - When Handsontable is put inside a container, which has the `overflow`, `height` and `width` properties defined in an external stylesheet, it doesn't work properly (too many cells are being rendered + scrolling issues) | priority | handsontable not reading width height overflow defined in the external css properly when handsontable is put inside a container which has the overflow height and width properties defined in an external stylesheet it doesn t work properly too many cells are being rendered scrolling issues | 1
626,651 | 19,830,642,216 | IssuesEvent | 2022-01-20 11:36:08 | GoldenSoftwareLtd/gedemin | https://api.github.com/repos/GoldenSoftwareLtd/gedemin | closed | Split the recipe journal document into 2 positions | Type-Enhancement Priority-High Meat | Originally reported on Google Code with ID 2295
```
Use the already implemented «берёза» functionality + add a parameters form in which:
2 combobox positions are displayed... both showing the name of the source product (with the
option to choose a different one; the lookup list comes from the recipes). For each position, add the
parameters
- product output
- raw-material input
(recalculate each parameter, i.e. if the output is entered, calculate the input,
or if the input is entered, calculate the output)
- a flag for whether the raw material is fresh
Check that the sum of the two input or output parameters equals the sum of the same
field in the source document
After the parameters are selected, split the document into 2 according to the selected parameters
```
Reported by `stasgm` on 2010-12-30 15:22:25
 | 1.0 | Split the recipe journal document into 2 positions - Originally reported on Google Code with ID 2295
```
Use the already implemented «берёза» functionality + add a parameters form in which:
2 combobox positions are displayed... both showing the name of the source product (with the
option to choose a different one; the lookup list comes from the recipes). For each position, add the
parameters
- product output
- raw-material input
(recalculate each parameter, i.e. if the output is entered, calculate the input,
or if the input is entered, calculate the output)
- a flag for whether the raw material is fresh
Check that the sum of the two input or output parameters equals the sum of the same
field in the source document
After the parameters are selected, split the document into 2 according to the selected parameters
```
Reported by `stasgm` on 2010-12-30 15:22:25
 | priority | split the recipe journal document into positions originally reported on google code with id use the already implemented берёза functionality add a parameters form in which combobox positions are displayed both showing the name of the source product with the option to choose a different one the lookup list comes from the recipes for each position add the parameters product output raw material input recalculate each parameter i e if the output is entered calculate the input or if the input is entered calculate the output a flag for whether the raw material is fresh check that the sum of the two input or output parameters equals the sum of the same field in the source document after the parameters are selected split the document into according to the selected parameters reported by stasgm on | 1
743,959 | 25,921,428,304 | IssuesEvent | 2022-12-15 22:24:31 | DSpace/dspace-angular | https://api.github.com/repos/DSpace/dspace-angular | closed | Use full community and collection names in breadcrumb navigation | improvement help wanted usability high priority good first issue Estimate TBD | **Is your feature request related to a problem? Please describe.**
In DSpace 7, long community and collection names are clipped in breadcrumb navigation. (The item title is also included in the breadcrumb navigation, which I think is unnecessary. But if the item title must be included, then do clip item titles, since they can be very long.)
<img width="1236" alt="DSpace7_breadcrumb" src="https://user-images.githubusercontent.com/13037168/186742802-210e067c-3a4b-4e0f-aef2-eb6bde8db35a.png">
Like many institutions, we have some long-winded community and collection names to match long-winded organizational units. Many start with our organization's name, so if they are clipped, they will be hard to distinguish. Therefore, please do not clip the community and collection names in the breadcrumb navigation.
**Describe the solution you'd like**
In DSpace 6 XMLUI, full community and collection names are used in the breadcrumb navigation. The item title is not included, which I think is preferable. This breadcrumb navigation has worked well for us.
<img width="1160" alt="DSpace6_XMLUI_breadcrumb" src="https://user-images.githubusercontent.com/13037168/186746458-1ea5b29a-1f2a-42e5-8b22-64d0d42be3bb.png">
**Describe alternatives or workarounds you've considered**
One option (discussed below) would be to continue to truncate these names, but provide a (hover over) tooltip which displays the full name.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | Use full community and collection names in breadcrumb navigation - **Is your feature request related to a problem? Please describe.**
In DSpace 7, long community and collection names are clipped in breadcrumb navigation. (The item title is also included in the breadcrumb navigation, which I think is unnecessary. But if the item title must be included, then do clip item titles, since they can be very long.)
<img width="1236" alt="DSpace7_breadcrumb" src="https://user-images.githubusercontent.com/13037168/186742802-210e067c-3a4b-4e0f-aef2-eb6bde8db35a.png">
Like many institutions, we have some long-winded community and collection names to match long-winded organizational units. Many start with our organization's name, so if they are clipped, they will be hard to distinguish. Therefore, please do not clip the community and collection names in the breadcrumb navigation.
**Describe the solution you'd like**
In DSpace 6 XMLUI, full community and collection names are used in the breadcrumb navigation. The item title is not included, which I think is preferable. This breadcrumb navigation has worked well for us.
<img width="1160" alt="DSpace6_XMLUI_breadcrumb" src="https://user-images.githubusercontent.com/13037168/186746458-1ea5b29a-1f2a-42e5-8b22-64d0d42be3bb.png">
**Describe alternatives or workarounds you've considered**
One option (discussed below) would be to continue to truncate these names, but provide a (hover over) tooltip which displays the full name.
**Additional context**
Add any other context or screenshots about the feature request here.
| priority | use full community and collection names in breadcrumb navigation is your feature request related to a problem please describe in dspace long community and collection names are clipped in breadcrumb navigation the item title is also included in the breadcrumb navigation which i think is unnecessary but if the item title must be included then do clip item titles since they can be very long img width alt breadcrumb src like many institutions we have some long winded community and collection names to match long winded organizational units many start with our organization s name so if they are clipped they will be hard to distinguish therefore please do not clip the community and collection names in the breadcrumb navigation describe the solution you d like in dspace xmlu full community and collection names are used in the breadcrumb navigation the item title is not included which i think is preferable this breadcrumb navigation has worked well for us img width alt xmlui breadcrumb src describe alternatives or workarounds you ve considered one option discussed below would be to continue to truncate these names but provide a hover over tooltip which displays the full name additional context add any other context or screenshots about the feature request here | 1 |
751,860 | 26,261,672,695 | IssuesEvent | 2023-01-06 08:31:29 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | Inside Building/city lag | Core Status: Need Info Priority: High | [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:**
While inside buildings/near structures, severe FPS drops and lag occur.
**How to reproduce:**
**How it should work:**
No lag - same as it is in the outside world.
**Database links:**
| 1.0 | Inside Building/city lag - [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:**
While inside buildings/near structures, severe FPS drops and lag occur.
**How to reproduce:**
**How it should work:**
No lag - same as it is in the outside world.
**Database links:**
| priority | inside building city lag rembember add links to things related to the bug using for example cata twinhead twinstar cz description while inside buildings near structures sever fps drop and lag occurs how to reproduce how it should work no lag same as it is in the outside world database links | 1 |
451,370 | 13,034,241,826 | IssuesEvent | 2020-07-28 08:23:36 | bryntum/support | https://api.github.com/repos/bryntum/support | opened | Resource images not seen in Gantt + Scheduler demo | bug high-priority regression | <img width="482" alt="Screenshot 2020-07-28 at 10 17 20" src="https://user-images.githubusercontent.com/218570/88638867-5b280580-d0bc-11ea-955b-5b754a2a933c.png">
| 1.0 | Resource images not seen in Gantt + Scheduler demo - <img width="482" alt="Screenshot 2020-07-28 at 10 17 20" src="https://user-images.githubusercontent.com/218570/88638867-5b280580-d0bc-11ea-955b-5b754a2a933c.png">
| priority | resource images not seen in gantt scheduler demo img width alt screenshot at src | 1 |
731,515 | 25,219,838,693 | IssuesEvent | 2022-11-14 11:57:44 | randombar164/ouchi_bar | https://api.github.com/repos/randombar164/ouchi_bar | closed | Implement handling for the case where the "cocktails you can make" search is not limited to specific ingredients (any ingredient in the same category will do). | Priority High backend | Handling for the case where the "cocktails you can make" search is not limited to specific ingredients (any ingredient in the same category will do) will be implemented later.
_Originally posted by @H0R15H0 in https://github.com/randombar164/ouchi_bar/issues/123#issuecomment-1170828806_ | 1.0 | Implement handling for the case where the "cocktails you can make" search is not limited to specific ingredients (any ingredient in the same category will do). - Handling for the case where the "cocktails you can make" search is not limited to specific ingredients (any ingredient in the same category will do) will be implemented later.
_Originally posted by @H0R15H0 in https://github.com/randombar164/ouchi_bar/issues/123#issuecomment-1170828806_ | priority | implement handling for the case where the cocktails you can make search is not limited to specific ingredients any ingredient in the same category will do handling for the case where the cocktails you can make search is not limited to specific ingredients any ingredient in the same category will do will be implemented later originally posted by in | 1
287,628 | 8,817,714,665 | IssuesEvent | 2018-12-31 04:18:02 | Codewars/codewars.com | https://api.github.com/repos/Codewars/codewars.com | closed | Code deleted after refreshing page in browser | high priority kind/bug | I recently did the kata "Square into Squares. Protect trees!" and confirmed the correctness of my code by pressing "Submit". I decided to make a couple of small changes and then pressed "Submit Final", but was greeted by the response "Unknown Error" several times in a row. I decided to refresh the page to see if that might solve the problem, and found that after the page had reloaded, all my code had disappeared and been replaced with the default starting code for that kata.
The browser I was using was Firefox, although I recall that something similar has happened to me when completing a kata with Google Chrome.
Addendum: I did the kata again and hit the same problem with "Unknown Error" after pressing "Submit Final". This time, luckily, I made sure to copy my code to the system clipboard before reloading the page. Once again, all my code had been replaced with the default starting code.
| 1.0 | Code deleted after refreshing page in browser - I recently did the kata "Square into Squares. Protect trees!" and confirmed the correctness of my code by pressing "Submit". I decided to make a couple of small changes and then pressed "Submit Final", but was greeted by the response "Unknown Error" several times in a row. I decided to refresh the page to see if that might solve the problem, and found that after the page had reloaded, all my code had disappeared and been replaced with the default starting code for that kata.
The browser I was using was Firefox, although I recall that something similar has happened to me when completing a kata with Google Chrome.
Addendum: I did the kata again and hit the same problem with "Unknown Error" after pressing "Submit Final". This time, luckily, I made sure to copy my code to the system clipboard before reloading the page. Once again, all my code had been replaced with the default starting code.
| priority | code deleted after refreshing page in browser i recently did the kata square into squares protect trees and confirmed the correctness of my code by pressing submit i decided to make a couple of small changes and then pressed submit final but was greeted by the response unknown error several times in a row i decided to refresh the page to see if that might solve the problem and found that after the page had reloaded all my code had disappeared and been replaced with the default starting code for that kata the browser i was using was firefox although i recall that something similar has happened to me when completing a kata with google chrome addendum i did the kata again and hit the same problem with unknown error after pressing submit final this time luckily i made sure to copy my code to the system clipboard before reloading the page once again all my code had been replaced with the default starting code | 1 |
248,076 | 7,927,117,465 | IssuesEvent | 2018-07-06 06:39:44 | mono/monodevelop | https://api.github.com/repos/mono/monodevelop | closed | FileWatcherService resource starvation | Area: Shell Performance high-priority vs-sync | Use a smarter model to manage how many threads are alive at once.
We create one file watcher per directory root, with the rule that if a path is a child of another path, remove the former from the watch list.
While this optimizes for some cases, there are problems with other scenarios:
a) out of tree file watchers
b) sibling directories
corefx FileSystemWatcher creates [one thread per watcher](https://github.com/dotnet/corefx/issues/30600), so we can end up starving resources if we use FileWatcherService in a bad way.
Work is being done on a separate lib that will handle this scenario regardless of the underlying FileSystemWatcher implementation: https://github.com/Therzok/PathTree
PathTree is a sorted tree, that mirrors the paths we want to watch in a tree. The logic behind the tree is that we can easily request a max of 8 root nodes by doing bfs traversal with a queue, keeping track of how many nodes are added and comparing the current queue length and the popped node's children count with 8.
Work on integrating that code into MonoDevelop
> VS bug [#638612](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/638612) | 1.0 | FileWatcherService resource starvation - Use a smarter model to manage how many threads are alive at once.
We create one file watcher per directory root, with the rule that if a path is a child of another path, remove the former from the watch list.
While this optimizes for some cases, there are problems with other scenarios:
a) out of tree file watchers
b) sibling directories
corefx FileSystemWatcher creates [one thread per watcher](https://github.com/dotnet/corefx/issues/30600), so we can end up starving resources if we use FileWatcherService in a bad way.
Work is being done on a separate lib that will handle this scenario regardless of the underlying FileSystemWatcher implementation: https://github.com/Therzok/PathTree
PathTree is a sorted tree, that mirrors the paths we want to watch in a tree. The logic behind the tree is that we can easily request a max of 8 root nodes by doing bfs traversal with a queue, keeping track of how many nodes are added and comparing the current queue length and the popped node's children count with 8.
Work on integrating that code into MonoDevelop
> VS bug [#638612](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/638612) | priority | filewatcherservice resource starvation use a smarter model to manage how many threads are alive at once we create one file watcher per directory root with the rule that if a path is a child of another path remove the former from the watch list while this optimizes for some cases there are problems with other scenarios a out of tree file watchers b sibling directories corefx filesystemwatcher creates so we can end up starving resources if we use filewatcherservice in a bad way work is being done on a separate lib that will handle this scenario regardless of the underlying filesystemwatcher implementation pathtree is a sorted tree that mirrors the paths we want to watch in a tree the logic behind the tree is that we can easily request a max of root nodes by doing bfs traversal with a queue keeping track of how many nodes are added and comparing the current queue length and the popped node s children count with work on integrating that code into monodevelop vs bug | 1 |
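The PathTree record above describes capping file watchers at 8 roots by BFS traversal, expanding a node only while the queue length plus the popped node's child count stays within the budget. A minimal sketch of that budgeting logic follows; the `Node`/`watch_roots` names and the exact budget check are editorial assumptions, not the actual API of the Therzok/PathTree library.

```python
from collections import deque

class Node:
    """One directory in the path tree, with its child directories."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def watch_roots(root, max_watchers=8):
    """Pick at most `max_watchers` subtree roots to watch, via BFS.

    A popped node is expanded into its children only while the total
    number of candidate roots (already chosen + still queued + the new
    children) stays within the budget; otherwise the node itself is
    kept as a watch root.
    """
    queue = deque([root])
    roots = []
    while queue:
        node = queue.popleft()
        if node.children and len(roots) + len(queue) + len(node.children) <= max_watchers:
            queue.extend(node.children)
        else:
            roots.append(node)
    return roots
```

With this check, a directory with ten immediate children is never expanded, so it stays a single watch root rather than spawning ten watcher threads, which is the resource-starvation scenario the issue describes.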
577,835 | 17,136,553,076 | IssuesEvent | 2021-07-13 03:28:14 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | closed | Search view should open at the bottom | P1 high-priority a-message list help wanted | At present, the search view opens at the top, i.e. showing the oldest fetched message matching the query. However, it is most likely that the user is looking for a recent message.
We should therefore open the search view at the bottom (most recent messages). | 1.0 | Search view should open at the bottom - At present, the search view opens at the top, i.e. showing the oldest fetched message matching the query. However, it is most likely that the user is looking for a recent message.
We should therefore open the search view at the bottom (most recent messages). | priority | search view should open at the bottom at present the search view opens at the top i e showing the oldest fetched message matching the query however it is most likely that the user is looking for a recent message we should therefore open the search view at the bottom most recent messages | 1 |
138,690 | 5,345,533,245 | IssuesEvent | 2017-02-17 17:11:21 | Metaswitch/ralf | https://api.github.com/repos/Metaswitch/ralf | closed | We don't add the Service-Context-Id AVP to ACRs | bug cat:easy critical deferred high-priority | <!--
This page is for reporting issues with Project Clearwater. If you have a question, rather than a bug report, the mailing list at clearwater@lists.projectclearwater.org is a better place for it.
To give us the best chance of fixing the problem, we've suggested some information to give - please follow these guidelines if possible.
Don't forget that you can attach logs and screenshots to Github issues - this may help us debug a problem.
-->
#### Symptoms
It seems that Ralf generates ACRs without the Service-Context-Id AVP, which it looks like it should contain - see TS32.299. I think it should be always set to `MNC.MCC.10.32260@3gpp.org` (whilst we support release 10 of the specs).
#### Impact
Resulting CDRs don't have any spec context!
#### Steps to reproduce
First step is to actually verify that this bug exists, which should be fairly straightforward! | 1.0 | We don't add the Service-Context-Id AVP to ACRs - <!--
This page is for reporting issues with Project Clearwater. If you have a question, rather than a bug report, the mailing list at clearwater@lists.projectclearwater.org is a better place for it.
To give us the best chance of fixing the problem, we've suggested some information to give - please follow these guidelines if possible.
Don't forget that you can attach logs and screenshots to Github issues - this may help us debug a problem.
-->
#### Symptoms
It seems that Ralf generates ACRs without the Service-Context-Id AVP, which it looks like it should contain - see TS32.299. I think it should be always set to `MNC.MCC.10.32260@3gpp.org` (whilst we support release 10 of the specs).
#### Impact
Resulting CDRs don't have any spec context!
#### Steps to reproduce
First step is to actually verify that this bug exists, which should be fairly straightforward! | priority | we don t add the service context id avp to acrs this page is for reporting issues with project clearwater if you have a question rather than a bug report the mailing list at clearwater lists projectclearwater org is a better place for it to give us the best chance of fixing the problem we ve suggested some information to give please follow these guidelines if possible don t forget that you can attach logs and screenshots to github issues this may help us debug a problem symptoms it seems that ralf generates acrs without the service context id avp which it looks like it should contain see i think it should be always set to mnc mcc org whilst we support release of the specs impact resulting cdrs don t have any spec context steps to reproduce first step is to actually verify that this bug exists which should be fairly straightforward | 1 |
510,663 | 14,814,009,657 | IssuesEvent | 2021-01-14 03:35:24 | juntofoundation/junto-mobile | https://api.github.com/repos/juntofoundation/junto-mobile | closed | Mentions search | High Priority | Let's have the results from the API return users that have the characters in either their username or name when mentioning. Right now, we're only account for usernames. | 1.0 | Mentions search - Let's have the results from the API return users that have the characters in either their username or name when mentioning. Right now, we're only account for usernames. | priority | mentions search let s have the results from the api return users that have the characters in either their username or name when mentioning right now we re only account for usernames | 1 |
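The mentions-search record above asks that matching consider both the username and the display name. A minimal sketch of such a filter is below; the `match_users` name and the dict shape of a user (`username`/`name` keys) are illustrative assumptions, not the project's actual API.

```python
def match_users(users, query):
    """Return users whose username OR display name contains `query`,
    case-insensitively.

    `users` is a list of dicts with 'username' and 'name' keys
    (an assumed shape, for illustration only).
    """
    q = query.lower()
    return [
        u for u in users
        if q in u["username"].lower() or q in u["name"].lower()
    ]
```

In practice this filtering would likely happen server-side in the API query rather than in application code, but the matching rule is the same: a hit on either field qualifies.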
713,781 | 24,539,439,639 | IssuesEvent | 2022-10-12 01:22:26 | AY2223S1-CS2113-F11-1/tp | https://api.github.com/repos/AY2223S1-CS2113-F11-1/tp | closed | As a Property Manager, I can delete properties | type.Story priority.High | ... so that I can stop tracking properties that i'm not managing | 1.0 | As a Property Manager, I can delete properties - ... so that I can stop tracking properties that i'm not managing | priority | as a property manager i can delete properties so that i can stop tracking properties that i m not managing | 1 |
517,384 | 15,008,360,404 | IssuesEvent | 2021-01-31 09:44:22 | bounswe/bounswe2020group9 | https://api.github.com/repos/bounswe/bounswe2020group9 | closed | iOS - Customer/ Add Review | Estimation - Medium Mobile Priority - High Status - Completed | Implement "adding reviews" feature.
As discussed in the meeting, customer shall be able to add review only for the products they have already purchased. Therefore, it should be carried out on the Orders Page.
**Deadline: 25.01.2021** | 1.0 | iOS - Customer/ Add Review - Implement "adding reviews" feature.
As discussed in the meeting, customer shall be able to add review only for the products they have already purchased. Therefore, it should be carried out on the Orders Page.
**Deadline: 25.01.2021** | priority | ios customer add review implement adding reviews feature as discussed in the meeting customer shall be able to add review only for the products they have already purchased therefore it should be carried out on the orders page deadline | 1 |
781,322 | 27,432,721,772 | IssuesEvent | 2023-03-02 03:34:56 | expertiza/expertiza | https://api.github.com/repos/expertiza/expertiza | closed | Extra checkbox column in student version of signup sheet | bug High Priority | The student version of the signup sheet (main/master branch) has a "Select" column populated with checkboxes. Ostensibly, the checkboxes only belong in the instructor version of the signup sheet.
<img width="566" alt="Screen Shot 2023-02-18 at 1 48 05 PM" src="https://user-images.githubusercontent.com/23277855/219883090-1f8c9b15-dba9-441d-84d8-c0fb9862b047.png">
| 1.0 | Extra checkbox column in student version of signup sheet - The student version of the signup sheet (main/master branch) has a "Select" column populated with checkboxes. Ostensibly, the checkboxes only belong in the instructor version of the signup sheet.
<img width="566" alt="Screen Shot 2023-02-18 at 1 48 05 PM" src="https://user-images.githubusercontent.com/23277855/219883090-1f8c9b15-dba9-441d-84d8-c0fb9862b047.png">
| priority | extra checkbox column in student version of signup sheet the student version of the signup sheet main master branch has a select column populated with checkboxes ostensibly the checkboxes only belong in the instructor version of the signup sheet img width alt screen shot at pm src | 1 |