Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
88,104 | 3,773,110,643 | IssuesEvent | 2016-03-17 00:02:42 | nilsschmidt1337/ldparteditor | https://api.github.com/repos/nilsschmidt1337/ldparteditor | closed | Is there a way to move the Manipolator icons to a toolbar? | enhancement high-priority | 
Well... its a mess IMHO :-1:
| 1.0 | Is there a way to move the Manipolator icons to a toolbar? - 
Well... its a mess IMHO :-1:
| priority | is there a way to move the manipolator icons to a toolbar well its a mess imho | 1 |
391,980 | 11,581,509,168 | IssuesEvent | 2020-02-21 22:51:35 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [engine] Support passing filters as variables in graphql queries | CI enhancement priority: high | **Is your feature request related to a problem? Please describe.**
GraphQL allows a consumer to pass a filter as a parameter in a graphQL query. Crafter does not currently support this.
**Describe the solution you'd like**
Support the ability to pass a graphQL query
Example of passing a filter as a parameter:
```
Query:
query MyQuery($disabled: BooleanFilters) {
page_entry {
items {
disabled(filter: $disabled)
}
}
}
Parameter:
{ "disabled": { "equals": true } }
```
| 1.0 | [engine] Support passing filters as variables in graphql queries - **Is your feature request related to a problem? Please describe.**
GraphQL allows a consumer to pass a filter as a parameter in a graphQL query. Crafter does not currently support this.
**Describe the solution you'd like**
Support the ability to pass a graphQL query
Example of passing a filter as a parameter:
```
Query:
query MyQuery($disabled: BooleanFilters) {
page_entry {
items {
disabled(filter: $disabled)
}
}
}
Parameter:
{ "disabled": { "equals": true } }
```
| priority | support passing filters as variables in graphql queries is your feature request related to a problem please describe graphql allows a consumer to pass a filter as a parameter in a graphql query crafter does not currently support this describe the solution you d like support the ability to pass a graphql query example of passing a filter as a parameter query query myquery disabled booleanfilters page entry items disabled filter disabled parameter disabled equals true | 1 |
654,488 | 21,653,993,932 | IssuesEvent | 2022-05-06 12:24:43 | epiforecasts/EpiNow2 | https://api.github.com/repos/epiforecasts/EpiNow2 | closed | Review use of futile.logger and error catching in package | bug help wanted high priority | The current implementation of `futile.logger`/error catching in `EpiNow2` makes it somewhat difficult to find errors or debug with R's normal tools (as I understand them) as it stands and could do with another dev pass.
This is due both to the complexity of the package and the evolution of `EpiNow2` without updating the supporting in package logging. This is blocking us from rapidly iterating in `covid-rt-estimates` and makes development of `EpiNow2` somewhat frustrating due to a lack of familiarity with the logging package/appropriate and robust error catching.
Support is needed to design a logging system that is user friendly, that catches errors where they happen to ease debugging, that does not saturate the code with too many logging updates, and that presents summaries to users appropriately when runnng large runs in a useful format without overwhelming them with log calls. Ideally this should be primarily be implemented into EpiNow2 rather than upstream so that all users can benefit. | 1.0 | Review use of futile.logger and error catching in package - The current implementation of `futile.logger`/error catching in `EpiNow2` makes it somewhat difficult to find errors or debug with R's normal tools (as I understand them) as it stands and could do with another dev pass.
This is due both to the complexity of the package and the evolution of `EpiNow2` without updating the supporting in package logging. This is blocking us from rapidly iterating in `covid-rt-estimates` and makes development of `EpiNow2` somewhat frustrating due to a lack of familiarity with the logging package/appropriate and robust error catching.
Support is needed to design a logging system that is user friendly, that catches errors where they happen to ease debugging, that does not saturate the code with too many logging updates, and that presents summaries to users appropriately when runnng large runs in a useful format without overwhelming them with log calls. Ideally this should be primarily be implemented into EpiNow2 rather than upstream so that all users can benefit. | priority | review use of futile logger and error catching in package the current implementation of futile logger error catching in makes it somewhat difficult to find errors or debug with r s normal tools as i understand them as it stands and could do with another dev pass this is due both to the complexity of the package and the evolution of without updating the supporting in package logging this is blocking us from rapidly iterating in covid rt estimates and makes development of somewhat frustrating due to a lack of familiarity with the logging package appropriate and robust error catching support is needed to design a logging system that is user friendly that catches errors where they happen to ease debugging that does not saturate the code with too many logging updates and that presents summaries to users appropriately when runnng large runs in a useful format without overwhelming them with log calls ideally this should be primarily be implemented into rather than upstream so that all users can benefit | 1 |
632,083 | 20,171,261,536 | IssuesEvent | 2022-02-10 10:37:21 | ballerina-platform/ballerina-update-tool | https://api.github.com/repos/ballerina-platform/ballerina-update-tool | closed | Current version (not published) is not showing as installed in `bal dist list` | Type/Bug Priority/High | **Description:**
When we are having different dsitribution versions (e.g: 2201.0.1-SNAPSHOT), the `bal dist list` command is not showing the installed version as it has been installed. Reason is that the current version has not been published yet.
<img width="834" alt="Screenshot 2022-02-10 at 14 57 56" src="https://user-images.githubusercontent.com/51471998/153378906-3fa0db98-f7fe-43fd-8ea7-26d5658370d8.png">
**Steps to reproduce:**
Download the installer (ubuntu deb/rpm, macos pkg, windows msi) from the daily build in ballerina-distribution or build the ballerina-distribution and generate installer from the pack.
Install ballerina from the installer and run the `bal dist list`.
**Affected Versions:**
All the versions
**OS, DB, other environment details and versions:**
All the Operating systems.
| 1.0 | Current version (not published) is not showing as installed in `bal dist list` - **Description:**
When we are having different dsitribution versions (e.g: 2201.0.1-SNAPSHOT), the `bal dist list` command is not showing the installed version as it has been installed. Reason is that the current version has not been published yet.
<img width="834" alt="Screenshot 2022-02-10 at 14 57 56" src="https://user-images.githubusercontent.com/51471998/153378906-3fa0db98-f7fe-43fd-8ea7-26d5658370d8.png">
**Steps to reproduce:**
Download the installer (ubuntu deb/rpm, macos pkg, windows msi) from the daily build in ballerina-distribution or build the ballerina-distribution and generate installer from the pack.
Install ballerina from the installer and run the `bal dist list`.
**Affected Versions:**
All the versions
**OS, DB, other environment details and versions:**
All the Operating systems.
| priority | current version not published is not showing as installed in bal dist list description when we are having different dsitribution versions e g snapshot the bal dist list command is not showing the installed version as it has been installed reason is that the current version has not been published yet img width alt screenshot at src steps to reproduce download the installer ubuntu deb rpm macos pkg windows msi from the daily build in ballerina distribution or build the ballerina distribution and generate installer from the pack install ballerina from the installer and run the bal dist list affected versions all the versions os db other environment details and versions all the operating systems | 1 |
580,421 | 17,243,677,397 | IssuesEvent | 2021-07-21 04:50:29 | rizinorg/rizin | https://api.github.com/repos/rizinorg/rizin | opened | Fix PE: corkami 65535sects.exe - section list, entrypoint, open and analyze with ASAN build | PE RzAnalysis high-priority optimization | It is unnecessary slow. Could be and should be optimized. It fails on our CI
```
[XX] db/formats/pe/65535sects PE: corkami 65535sects.exe - section list, entrypoint, open and analyze
RZ_NOPLUGINS=1 rizin -escr.utf8=0 -escr.color=0 -escr.interactive=0 -N -Qc 'aa
om~?
s
pi 1
q!
' bins/pe/65535sects.exe
-- stdout
--- expected
+++ actual
@@ -1,3 +1,0 @@
-65536
-0x291120
-mov edi, 0x7027aff9
-- stderr
[ ] Analyze all flags starting with sym. and entry0 (aa)
[
[x] Analyze all flags starting with sym. and entry0 (aa)
-- exit status: -1
``` | 1.0 | Fix PE: corkami 65535sects.exe - section list, entrypoint, open and analyze with ASAN build - It is unnecessary slow. Could be and should be optimized. It fails on our CI
```
[XX] db/formats/pe/65535sects PE: corkami 65535sects.exe - section list, entrypoint, open and analyze
RZ_NOPLUGINS=1 rizin -escr.utf8=0 -escr.color=0 -escr.interactive=0 -N -Qc 'aa
om~?
s
pi 1
q!
' bins/pe/65535sects.exe
-- stdout
--- expected
+++ actual
@@ -1,3 +1,0 @@
-65536
-0x291120
-mov edi, 0x7027aff9
-- stderr
[ ] Analyze all flags starting with sym. and entry0 (aa)
[
[x] Analyze all flags starting with sym. and entry0 (aa)
-- exit status: -1
``` | priority | fix pe corkami exe section list entrypoint open and analyze with asan build it is unnecessary slow could be and should be optimized it fails on our ci db formats pe pe corkami exe section list entrypoint open and analyze rz noplugins rizin escr escr color escr interactive n qc aa om s pi q bins pe exe stdout expected actual mov edi stderr analyze all flags starting with sym and aa analyze all flags starting with sym and aa exit status | 1 |
650,621 | 21,411,200,362 | IssuesEvent | 2022-04-22 06:13:08 | lifestream-friendship-network/feedback | https://api.github.com/repos/lifestream-friendship-network/feedback | closed | Paginated Friends, blocks, invite queries | high priority next deployment | All of these were placeholder "return everything" situations, and that's no good once it scales up a bit. | 1.0 | Paginated Friends, blocks, invite queries - All of these were placeholder "return everything" situations, and that's no good once it scales up a bit. | priority | paginated friends blocks invite queries all of these were placeholder return everything situations and that s no good once it scales up a bit | 1 |
556,066 | 16,473,573,652 | IssuesEvent | 2021-05-23 22:16:38 | meanmedianmoge/zoia_lib | https://api.github.com/repos/meanmedianmoge/zoia_lib | opened | No Patch Names or Details | bug high priority | **Describe the bug**
There's no info about any patches
**To Reproduce**
Steps to reproduce the behavior:
1. download the app and run it on ubuntu 20.04
**Expected behavior**
Expect to see some patch names, from the screenshots in the manual this is what I expect
**Screenshots**

**Desktop:**
Ubuntu 20.04
| 1.0 | No Patch Names or Details - **Describe the bug**
There's no info about any patches
**To Reproduce**
Steps to reproduce the behavior:
1. download the app and run it on ubuntu 20.04
**Expected behavior**
Expect to see some patch names, from the screenshots in the manual this is what I expect
**Screenshots**

**Desktop:**
Ubuntu 20.04
| priority | no patch names or details describe the bug there s no info about any patches to reproduce steps to reproduce the behavior download the app and run it on ubuntu expected behavior expect to see some patch names from the screenshots in the manual this is what i expect screenshots desktop ubuntu | 1 |
545,448 | 15,950,809,054 | IssuesEvent | 2021-04-15 09:05:37 | sopra-fs21-group-4/client | https://api.github.com/repos/sopra-fs21-group-4/client | opened | submit meme title request and implementation in the backend | high priority task | submit meme title request and implementation in the backend
Time estimate 8h
User story: #9 | 1.0 | submit meme title request and implementation in the backend - submit meme title request and implementation in the backend
Time estimate 8h
User story: #9 | priority | submit meme title request and implementation in the backend submit meme title request and implementation in the backend time estimate user story | 1 |
804,197 | 29,478,738,953 | IssuesEvent | 2023-06-02 02:16:34 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Make sure when Electron crashes there is a crash report | OS: mac browser: electron type: user experience stage: backlog priority: high stale | To debug issues like #1434 we should make sure there is Electron crash report saved (see https://electronjs.org/docs/api/crash-reporter)
- [ ] how to reliably crash Cypress on Mac? Really need segmentation fault like this one
```
Node[1] 29936 segmentation fault ./node_modules/cypress/dist/Cypress.app/Contents/MacOS/Cypress
```
- [ ] does it write message about the crash into `/var/log/system.log`?
- [ ] make sure the report file is saved
- [ ] put the report and instructions to the user in the error message shown from the CLI | 1.0 | Make sure when Electron crashes there is a crash report - To debug issues like #1434 we should make sure there is Electron crash report saved (see https://electronjs.org/docs/api/crash-reporter)
- [ ] how to reliably crash Cypress on Mac? Really need segmentation fault like this one
```
Node[1] 29936 segmentation fault ./node_modules/cypress/dist/Cypress.app/Contents/MacOS/Cypress
```
- [ ] does it write message about the crash into `/var/log/system.log`?
- [ ] make sure the report file is saved
- [ ] put the report and instructions to the user in the error message shown from the CLI | priority | make sure when electron crashes there is a crash report to debug issues like we should make sure there is electron crash report saved see how to reliably crash cypress on mac really need segmentation fault like this one node segmentation fault node modules cypress dist cypress app contents macos cypress does it write message about the crash into var log system log make sure the report file is saved put the report and instructions to the user in the error message shown from the cli | 1 |
656,242 | 21,724,213,399 | IssuesEvent | 2022-05-11 05:41:11 | jordan-sullivan/flashcards-2.5 | https://api.github.com/repos/jordan-sullivan/flashcards-2.5 | opened | create takeTurn Method | high priority | takeTurn: method that updates turns count, evaluates guesses, gives feedback, and stores ids of incorrect guesses
When a guess is made, a new Turn instance is created.
The turns count is updated, regardless of whether the guess is correct or incorrect
The next card becomes current card
Guess is evaluated/recorded. Incorrect guesses will be stored (via the id) in an array of incorrectGuesses
Feedback is returned regarding whether the guess is incorrect or correct | 1.0 | create takeTurn Method - takeTurn: method that updates turns count, evaluates guesses, gives feedback, and stores ids of incorrect guesses
When a guess is made, a new Turn instance is created.
The turns count is updated, regardless of whether the guess is correct or incorrect
The next card becomes current card
Guess is evaluated/recorded. Incorrect guesses will be stored (via the id) in an array of incorrectGuesses
Feedback is returned regarding whether the guess is incorrect or correct | priority | create taketurn method taketurn method that updates turns count evaluates guesses gives feedback and stores ids of incorrect guesses when a guess is made a new turn instance is created the turns count is updated regardless of whether the guess is correct or incorrect the next card becomes current card guess is evaluated recorded incorrect guesses will be stored via the id in an array of incorrectguesses feedback is returned regarding whether the guess is incorrect or correct | 1 |
21,603 | 2,641,718,336 | IssuesEvent | 2015-03-11 19:20:16 | chrsmith/html5rocks | https://api.github.com/repos/chrsmith/html5rocks | closed | Commit 129 | Milestone-2 Priority-High Type-Review | Original [issue 74](https://code.google.com/p/html5rocks/issues/detail?id=74) created by chrsmith on 2010-07-28T00:14:17.000Z:
<b>Link to revision:</b>
http://code.google.com/p/html5rocks/source/detail?r=129
<b>Purpose of code changes:</b>
Adding web workers tutorial
| 1.0 | Commit 129 - Original [issue 74](https://code.google.com/p/html5rocks/issues/detail?id=74) created by chrsmith on 2010-07-28T00:14:17.000Z:
<b>Link to revision:</b>
http://code.google.com/p/html5rocks/source/detail?r=129
<b>Purpose of code changes:</b>
Adding web workers tutorial
| priority | commit original created by chrsmith on link to revision purpose of code changes adding web workers tutorial | 1 |
734,725 | 25,360,597,676 | IssuesEvent | 2022-11-20 21:04:44 | bounswe/bounswe2022group7 | https://api.github.com/repos/bounswe/bounswe2022group7 | closed | [BE] Get User Endpoints Implementation | Status: Completed Priority: High Difficulty: Medium Type: Implementation Target: Backend | We should implement an endpoint that returns user related information in order to provide data of the profile pages to the frontend team.
We are going to have two different endpoints:
1. /profile returns the whole registered user class which should be extracted from the token. Token is required, no additional body or variable required.
2. /profile/{username} returns the account info information of the user corresponding the username. Token is not required, only takes username path variable.
@sabrimete is assigned to this implementation.
**Reviewer:** @askabderon
**Deadline:** 22/11/2022 23.59 | 1.0 | [BE] Get User Endpoints Implementation - We should implement an endpoint that returns user related information in order to provide data of the profile pages to the frontend team.
We are going to have two different endpoints:
1. /profile returns the whole registered user class which should be extracted from the token. Token is required, no additional body or variable required.
2. /profile/{username} returns the account info information of the user corresponding the username. Token is not required, only takes username path variable.
@sabrimete is assigned to this implementation.
**Reviewer:** @askabderon
**Deadline:** 22/11/2022 23.59 | priority | get user endpoints implementation we should implement an endpoint that returns user related information in order to provide data of the profile pages to the frontend team we are going to have two different endpoints profile returns the whole registered user class which should be extracted from the token token is required no additional body or variable required profile username returns the account info information of the user corresponding the username token is not required only takes username path variable sabrimete is assigned to this implementation reviewer askabderon deadline | 1 |
651,304 | 21,473,107,620 | IssuesEvent | 2022-04-26 11:21:14 | mdfbaam/ORNL-Slicer-2-Issue-Tracker | https://api.github.com/repos/mdfbaam/ORNL-Slicer-2-Issue-Tracker | closed | Slicer does not output F values when slicing | bug core-slicing high-priority | ## Expected Behavior
Self-Explanatory.
## Actual Behavior
Normally there would be FXXX right in front of all the "S" commands (like S170). However, I came across this bug when slicing at an angle. Even after restarting the computer and putting the slice angle back to zero it still did not output F values for the normal print moves. You'll notice it DOES output F values for the tip wipe, so that seems to be operating normally. However, it is missing from all the other lines.
## Possible Solution
Self-Explanatory
## Steps to Reproduce the Problem
Self-Explanatory. Ideally, bullet or numbered list.
## Specifications
Platform, machine specs, anything else you think might be relevant.

| 1.0 | Slicer does not output F values when slicing - ## Expected Behavior
Self-Explanatory.
## Actual Behavior
Normally there would be FXXX right in front of all the "S" commands (like S170). However, I came across this bug when slicing at an angle. Even after restarting the computer and putting the slice angle back to zero it still did not output F values for the normal print moves. You'll notice it DOES output F values for the tip wipe, so that seems to be operating normally. However, it is missing from all the other lines.
## Possible Solution
Self-Explanatory
## Steps to Reproduce the Problem
Self-Explanatory. Ideally, bullet or numbered list.
## Specifications
Platform, machine specs, anything else you think might be relevant.

| priority | slicer does not output f values when slicing expected behavior self explanatory actual behavior normally there would be fxxx right in front of all the s commands like however i came across this bug when slicing at an angle even after restarting the computer and putting the slice angle back to zero it still did not output f values for the normal print moves you ll notice it does output f values for the tip wipe so that seems to be operating normally however it is missing from all the other lines possible solution self explanatory steps to reproduce the problem self explanatory ideally bullet or numbered list specifications platform machine specs anything else you think might be relevant | 1 |
547,315 | 16,041,109,201 | IssuesEvent | 2021-04-22 08:00:39 | 46elks/praktik-apiskolan | https://api.github.com/repos/46elks/praktik-apiskolan | closed | Add link to live webpage in "about" section of repository | enhancement good first issue high priority | The future link to the live website should be provided in the "about" section, which also contains a description and topics | 1.0 | Add link to live webpage in "about" section of repository - The future link to the live website should be provided in the "about" section, which also contains a description and topics | priority | add link to live webpage in about section of repository the future link to the live website should be provided in the about section which also contains a description and topics | 1 |
191,441 | 6,828,965,161 | IssuesEvent | 2017-11-08 22:18:10 | crowdAI/crowdai | https://api.github.com/repos/crowdAI/crowdai | closed | Participant qualification - multi round challenge | feature high priority | Reject submissions from participants if they have not qualified for stage 2 (with an appropriate message)
Some view of the qualified stage 2 participants
Re: https://github.com/crowdAI/crowdai/issues/340 | 1.0 | Participant qualification - multi round challenge - Reject submissions from participants if they have not qualified for stage 2 (with an appropriate message)
Some view of the qualified stage 2 participants
Re: https://github.com/crowdAI/crowdai/issues/340 | priority | participant qualification multi round challenge reject submissions from participants if they have not qualified for stage with an appropriate message some view of the qualified stage participants re | 1 |
751,446 | 26,245,424,703 | IssuesEvent | 2023-01-05 14:54:26 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Unread chat badge shown on muted channels on reboot | bug Chat priority 1: high E:Bugfixes S:2 messenger | # Bug Report
## Description
If you have a muted public chat, receive messages it, don't read them, close the app and reopen, the unread badge will be shown on the section icon on the left
See video below for example
## Steps to reproduce
1. Mute a channel and put it as inactive
2. In another account, send messages in that muted channel (this works fine, no badge is shown)
3. Close app no. 1 and reopen
Result: the badge is shown on the left and you don't even know why
#### Expected behavior
No badge is ever shown for a muted channel
### Additional Information
- Status desktop version: 0.9.0RC1
[Screencast 2023-01-03 16:15:45-unread.webm](https://user-images.githubusercontent.com/11926403/210442700-2a3817ab-e903-413d-b066-41937bc4486e.webm)
| 1.0 | Unread chat badge shown on muted channels on reboot - # Bug Report
## Description
If you have a muted public chat, receive messages it, don't read them, close the app and reopen, the unread badge will be shown on the section icon on the left
See video below for example
## Steps to reproduce
1. Mute a channel and put it as inactive
2. In another account, send messages in that muted channel (this works fine, no badge is shown)
3. Close app no. 1 and reopen
Result: the badge is shown on the left and you don't even know why
#### Expected behavior
No badge is ever shown for a muted channel
### Additional Information
- Status desktop version: 0.9.0RC1
[Screencast 2023-01-03 16:15:45-unread.webm](https://user-images.githubusercontent.com/11926403/210442700-2a3817ab-e903-413d-b066-41937bc4486e.webm)
| priority | unread chat badge shown on muted channels on reboot bug report description if you have a muted public chat receive messages it don t read them close the app and reopen the unread badge will be shown on the section icon on the left see video below for example steps to reproduce mute a channel and put it as inactive in another account send messages in that muted channel this works fine no badge is shown close app no and reopen result the badge is shown on the left and you don t even know why expected behavior no badge is ever shown for a muted channel additional information status desktop version | 1 |
125,652 | 4,959,184,694 | IssuesEvent | 2016-12-02 12:29:22 | hpi-swt2/workshop-portal | https://api.github.com/repos/hpi-swt2/workshop-portal | opened | Start page organizer view | High Priority team-helene | **As**
organizer
**I want to**
have a top page menu bar consisting of Start, Veranstaltungen, Anfragen and a drop down under my name with Profilinfo, Mein Profil (former Nutzer in the pupil view), Ausloggen
**in order to**
have smooth navigation
- [ ] | 1.0 | Start page organizer view - **As**
organizer
**I want to**
have a top page menu bar consisting of Start, Veranstaltungen, Anfragen and a drop down under my name with Profilinfo, Mein Profil (former Nutzer in the pupil view), Ausloggen
**in order to**
have smooth navigation
- [ ] | priority | start page organizer view as organizer i want to have a top page menu bar consisting of start veranstaltungen anfragen and a drop down under my name with profilinfo mein profil former nutzer in the pupil view ausloggen in order to have smooth navigation | 1 |
502,993 | 14,576,998,838 | IssuesEvent | 2020-12-18 00:52:45 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | Created labels are not integer | API: labels.py bug priority:HIGH sct_label_vertebrae | ### Description
When creating manual labels, the created labels do not have the value asked for: they are output in float (it whould be integer) and the value is slightly different (precision issue).
User forum: https://forum.spinalcordmri.org/t/sct-register-to-template-source-and-destination-landmarks-are-not-the-same/590/4
### Steps to Reproduce
Data: [t2.nii.gz](https://github.com/neuropoly/spinalcordtoolbox/files/5704767/t2.nii.gz)
~~~
sct_label_utils -i t2.nii.gz -create-viewer 3,9 -ldisc t2_labels_disc_manual.nii.gz
sct_label_utils -i t2_labels_disc_manual.nii.gz -display
~~~
output:
~~~
Position=(214,298,7) -- Value= 2.999999999301508
Position=(199,103,7) -- Value= 8.999999997904524
~~~
Expected output:
~~~
Position=(214,298,7) -- Value= 3
Position=(199,103,7) -- Value= 9
~~~
Interestingly, when doing the same experiment on the `sct_testing_data/t2/t2.nii.gz`, it works as expected.
| 1.0 | Created labels are not integer - ### Description
When creating manual labels, the created labels do not have the value asked for: they are output in float (it whould be integer) and the value is slightly different (precision issue).
User forum: https://forum.spinalcordmri.org/t/sct-register-to-template-source-and-destination-landmarks-are-not-the-same/590/4
### Steps to Reproduce
Data: [t2.nii.gz](https://github.com/neuropoly/spinalcordtoolbox/files/5704767/t2.nii.gz)
~~~
sct_label_utils -i t2.nii.gz -create-viewer 3,9 -ldisc t2_labels_disc_manual.nii.gz
sct_label_utils -i t2_labels_disc_manual.nii.gz -display
~~~
output:
~~~
Position=(214,298,7) -- Value= 2.999999999301508
Position=(199,103,7) -- Value= 8.999999997904524
~~~
Expected output:
~~~
Position=(214,298,7) -- Value= 3
Position=(199,103,7) -- Value= 9
~~~
Interestingly, when doing the same experiment on the `sct_testing_data/t2/t2.nii.gz`, it works as expected.
| priority | created labels are not integer description when creating manual labels the created labels do not have the value asked for they are output in float it whould be integer and the value is slightly different precision issue user forum steps to reproduce data sct label utils i nii gz create viewer ldisc labels disc manual nii gz sct label utils i labels disc manual nii gz display output position value position value expected output position value position value interestingly when doing the same experiment on the sct testing data nii gz it works as expected | 1 |
219,456 | 7,342,566,809 | IssuesEvent | 2018-03-07 08:24:33 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Filter claims from SSO consent approval for OIDC | Priority/Highest Severity/Blocker Type/Improvement | Output user information of ID tokens, userinfo requests should be restricted based on the user consent | 1.0 | Filter claims from SSO consent approval for OIDC - Output user information of ID tokens, userinfo requests should be restricted based on the user consent | priority | filter claims from sso consent approval for oidc output user information of id tokens userinfo requests should be restricted based on the user consent | 1 |
211,280 | 7,199,985,813 | IssuesEvent | 2018-02-05 17:32:39 | DrylandEcology/rSFSW2 | https://api.github.com/repos/DrylandEcology/rSFSW2 | closed | No slot of name "MonthlyProductionValues_grass" | bug high priority in progress | All rSFSW2 simulations are failing due to recent commits to rSOILWAT2's master branch.
```
[1] "Datafile 'sw_input_climscen_values' contains zero rows. 'Label's of the master input file 'SWRunInformation' are used to populate rows and 'Label's of the datafile."
Error in slot(prod_default, paste0("MonthlyProductionValues_", tolower(fg))) :
no slot of name "MonthlyProductionValues_grass" for this object of class "swProd"
```
The rSFSW2 automated builds did not catch this, because the last run on master was before the commits that caused this failure. I restarted rSFSW2's automated build on master and replicated the error, along with the other two pull requests open on rSFSW2.
---------------------
**The issue is very simple: the function `update_biomass` in `Vegetation.R` needs to be updated to match the new slot layout in `swProd`.**
I tried changing it from this:
```
temp <- slot(prod_default, paste0("MonthlyProductionValues_", tolower(fg)))
```
To this:
```
if (fg == "Grass") temp <- rSOILWAT2::swProd_MonProd_grass(prod_default)
else if (fg == "Shrub") temp <- rSOILWAT2::swProd_MonProd_shrub(prod_default)
else if (fg == "Tree") temp <- rSOILWAT2::swProd_MonProd_tree(prod_default)
else if (fg == "Forb") temp <- rSOILWAT2::swProd_MonProd_forb(prod_default)
```
But it was unsuccessful:
```
Error in object@MonthlyVeg[[rSW2_glovars[["kSOILWAT2"]][["VegTypes"]][["SW_TREES"]]]] :
attempt to select less than one element in integerOneIndex
```
So, I am going to continue working on what I was assigned to and am instead assigning @dschlaep because he made the changes to rSOILWAT2. In the meantime, commit `e0fa1acd62c2d17c961e58ff2a1eed982a347937` on rSOILWAT2 does not have this issue. | 1.0 | No slot of name "MonthlyProductionValues_grass" - All rSFSW2 simulations are failing due to recent commits to rSOILWAT2's master branch.
```
[1] "Datafile 'sw_input_climscen_values' contains zero rows. 'Label's of the master input file 'SWRunInformation' are used to populate rows and 'Label's of the datafile."
Error in slot(prod_default, paste0("MonthlyProductionValues_", tolower(fg))) :
no slot of name "MonthlyProductionValues_grass" for this object of class "swProd"
```
The rSFSW2 automated builds did not catch this, because the last run on master was before the commits that caused this failure. I restarted rSFSW2's automated build on master and replicated the error, along with the other two pull requests open on rSFSW2.
---------------------
**The issue is very simple: the function `update_biomass` in `Vegetation.R` needs to be updated to match the new slot layout in `swProd`.**
I tried changing it from this:
```
temp <- slot(prod_default, paste0("MonthlyProductionValues_", tolower(fg)))
```
To this:
```
if (fg == "Grass") temp <- rSOILWAT2::swProd_MonProd_grass(prod_default)
else if (fg == "Shrub") temp <- rSOILWAT2::swProd_MonProd_shrub(prod_default)
else if (fg == "Tree") temp <- rSOILWAT2::swProd_MonProd_tree(prod_default)
else if (fg == "Forb") temp <- rSOILWAT2::swProd_MonProd_forb(prod_default)
```
But it was unsuccessful:
```
Error in object@MonthlyVeg[[rSW2_glovars[["kSOILWAT2"]][["VegTypes"]][["SW_TREES"]]]] :
attempt to select less than one element in integerOneIndex
```
So, I am going to continue working on what I was assigned to and am instead assigning @dschlaep because he made the changes to rSOILWAT2. In the meantime, commit `e0fa1acd62c2d17c961e58ff2a1eed982a347937` on rSOILWAT2 does not have this issue. | priority | no slot of name monthlyproductionvalues grass all simulations are failing due to recent commits to s master branch datafile sw input climscen values contains zero rows label s of the master input file swruninformation are used to populate rows and label s of the datafile error in slot prod default monthlyproductionvalues tolower fg no slot of name monthlyproductionvalues grass for this object of class swprod the automated builds did not catch this because the last run on master was before the commits that caused this failure i restarted s automated build on master and replicated the error along with the other two pull requests open on the issue is very simple the function update biomass in vegetation r needs to be updated to match the new slot layout in swprod i tried changing it from this temp slot prod default monthlyproductionvalues tolower fg to this if fg grass temp swprod monprod grass prod default else if fg shrub temp swprod monprod shrub prod default else if fg tree temp swprod monprod tree prod default else if fg forb temp swprod monprod forb prod default but it was unsuccessful error in object monthlyveg attempt to select less than one element in integeroneindex so i am going to continue working on what i was assigned to and am instead assigning dschlaep because he made the changes to in the meantime commit on does not have this issue | 1 |
699,488 | 24,018,339,351 | IssuesEvent | 2022-09-15 04:32:43 | MathMarEcol/WSMPA2 | https://api.github.com/repos/MathMarEcol/WSMPA2 | closed | Order of features and targets | High Priority | In the `prioritizr::problem` call, ensure that we have a check for the order of features matching the order of targets. Otherwise the incorrect target could be applied.
They should all stay in the correct order but worth checking. | 1.0 | Order of features and targets - In the `prioritizr::problem` call, ensure that we have a check for the order of features matching the order of targets. Otherwise the incorrect target could be applied.
They should all stay in the correct order but worth checking. | priority | order of features and targets in the prioritizr problem call ensure that we have a check for the order of features matching the order of targets otherwise the incorrect target could be applied they should all stay in the correct order but worth checking | 1 |
163,910 | 6,216,692,020 | IssuesEvent | 2017-07-08 06:46:12 | tkh44/emotion | https://api.github.com/repos/tkh44/emotion | closed | Don't pass innerRef to DOM component | beginner friendly bug help wanted high priority | I have a StyledInput and use `innerRef` to get the input element. But React will display this annoying warning.
`Warning: Unknown prop `innerRef` on <input> tag. Remove this prop from the element. For details, see https://fb.me/react-unknown-prop`
We should remove the innerRef after passing it to ref. Or better, filter unknown props like styled-components does, but this comes at the cost of a bigger runtime
`Warning: Unknown prop `innerRef` on <input> tag. Remove this prop from the element. For details, see https://fb.me/react-unknown-prop`
We should remove the innerRef after passing it to ref. Or better, filter unknown props like styled-components does, but this comes at the cost of a bigger runtime
188,189 | 6,773,825,638 | IssuesEvent | 2017-10-27 08:00:54 | vincentrk/quadrodoodle | https://api.github.com/repos/vincentrk/quadrodoodle | closed | Stop sending JS and Throttle messages during calibration | high priority | Currently these messages are being sent but should not be.
Look into why/where they are being sent from and stop them for calibration mode | 1.0 | Stop sending JS and Throttle messages during calibration - Currently these messages are being sent but should not be.
Look into why/where they are being sent from and stop them for calibration mode | priority | stop sending js and throttle messages during calibration currently these messages are being sent but should not be look into why where they are being sent from and stop them for calibration mode | 1 |
440,058 | 12,692,411,987 | IssuesEvent | 2020-06-21 22:21:23 | cds-snc/covid-shield-mobile | https://api.github.com/repos/cds-snc/covid-shield-mobile | closed | Turn on Bluetooth button doesn't work | bluetooth high priority upstream | If bluetooth is turned off globally, the button should take you to the correct screen to turn it on. | 1.0 | Turn on Bluetooth button doesn't work - If bluetooth is turned off globally, the button should take you to the correct screen to turn it on. | priority | turn on bluetooth button doesn t work if bluetooth is turned off globally the button should take you to the correct screen to turn it on | 1 |
227,895 | 7,543,956,738 | IssuesEvent | 2018-04-17 16:56:53 | GingerWalnut/SQ5.0Public | https://api.github.com/repos/GingerWalnut/SQ5.0Public | closed | Arenstad Server Transfer Glitch | Priority High Ships Bug | When I flew out of Arenstad, I spawned too close to Arenstad and we're stuck outside of the planet as server jump is failing. Additionally, it says that we are encountering an obstacle about 5000 blocks away from where we are so... | 1.0 | Arenstad Server Transfer Glitch - When I flew out of Arenstad, I spawned too close to Arenstad and we're stuck outside of the planet as server jump is failing. Additionally, it says that we are encountering an obstacle about 5000 blocks away from where we are so... | priority | arenstad server transfer glitch when i flew out of arenstad i spawned too close to arenstad and we re stuck outside of the planet as server jump is failing additionally it says that we are encountering an obstacle about blocks away from where we are so | 1 |
118,808 | 4,756,261,887 | IssuesEvent | 2016-10-24 13:32:16 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Widgets - Embedding of Dataverse Metrics | Component: Dataverse General Info Priority: High Status: Triaged Type: Feature | @dancabral would like to embed the dataverse metrics onto the IQSS website.
Is is possible to add this data as a embeddable widget so that it can be updated continuously?
Specifically these elements:


If a JS embed is not practical, we could use a custom created iframe as long as the data is available at a URI.
| 1.0 | Widgets - Embedding of Dataverse Metrics - @dancabral would like to embed the dataverse metrics onto the IQSS website.
Is is possible to add this data as a embeddable widget so that it can be updated continuously?
Specifically these elements:


If a JS embed is not practical, we could use a custom created iframe as long as the data is available at a URI.
| priority | widgets embedding of dataverse metrics dancabral would like to embed the dataverse metrics onto the iqss website is is possible to add this data as a embeddable widget so that it can be updated continuously specifically these elements if a js embed is not practical we could use a custom created iframe as long as the data is available at a uri | 1 |
657,344 | 21,790,899,968 | IssuesEvent | 2022-05-14 22:08:46 | bounswe/bounswe2022group9 | https://api.github.com/repos/bounswe/bounswe2022group9 | closed | Practice App: Adding sign in functionality | Priority: High In Progress Practice Application | Deadline: 15.05.2022 23.59
TODO:
- [ ] A sign-in function should be added to views.py.
- [ ] An HTML page should be prepared for sign in.
TODO:
- [ ] A sign-in function should be added to views.py.
- [ ] An HTML page should be prepared for sign in.
123,469 | 4,863,427,937 | IssuesEvent | 2016-11-14 15:26:34 | mgoral/subconvert | https://api.github.com/repos/mgoral/subconvert | opened | Move to tox+pytest | High Priority Request | Just pretend that autotools never happened, ok?
One crazy thing is how we compile and install translations. I'm thinking about extracting these to a separate repository and handling their installation there (probably via autotools or some dead-simple script).
Another thing: to keep backward compatibility, subconvert should remove the old distribution from $PREFIX and link/install a start script and .desktop file. Or maybe it shouldn't do anything? | 1.0 | Move to tox+pytest - Just pretend that autotools never happened, ok?
One crazy thing is how we compile and install translations. I'm thinking about extracting these to a separate repository and handling their installation there (probably via autotools or some dead-simple script).
Another thing: to keep backward compatibility, subconvert should remove the old distribution from $PREFIX and link/install a start script and .desktop file. Or maybe it shouldn't do anything? | priority | move to tox pytest just pretend that autotools never happened ok one crazy thing is how we compile and install translations i m thinking about extracting these to a separate repository and handling their installation there probably via autotools or some dead simple script another thing to keep backward compatibility subconvert should remove the old distribution from prefix and link install a start script and desktop file or maybe it shouldn t do anything | 1 |
490,701 | 14,139,008,939 | IssuesEvent | 2020-11-10 09:15:24 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Add Country attribute to SCIM2 user core dialect | Priority/Highest Severity/Major improvement | **Describe the issue:**
At the moment even though we have the attribute http://wso2.org/claims/country in the local dialect we don't have a SCIM attribute aligned with it. So it's better to add a new SCIM attribute and map it to the local attribute. | 1.0 | Add Country attribute to SCIM2 user core dialect - **Describe the issue:**
At the moment even though we have the attribute http://wso2.org/claims/country in the local dialect we don't have a SCIM attribute aligned with it. So it's better to add a new SCIM attribute and map it to the local attribute. | priority | add country attribute to user core dialect describe the issue at the moment even though we have the attribute in the local dialect we don t have a scim attribute aligned with it so it s better to add a new scim attribute and map it to the local attribute | 1 |
737,870 | 25,535,667,603 | IssuesEvent | 2022-11-29 11:47:18 | aau-giraf/web-api | https://api.github.com/repos/aau-giraf/web-api | closed | Integration tests change values in the database for localhost or live server | priority: high | ## Description
When running the integration tests more than once, the number of failed tests increases from 46 to 100+. The integration tests access the actual database (the one used in production, or whatever is used locally) and change it. Since the tests fail, the database is left in a changed state and no longer works as intended.
**Possible Suggested Solution**
Change the setup so that the tests run in a temporary database that will be deleted after the tests have been completed.
**This issue will be expanded upon when further information is discovered.** | 1.0 | Integration tests change values in the database for localhost or live server - ## Description
When running the integration tests more than once, the number of failed tests increases from 46 to 100+. The integration tests access the actual database (the one used in production, or whatever is used locally) and change it. Since the tests fail, the database is left in a changed state and no longer works as intended.
**Possible Suggested Solution**
Change the setup so that the tests run in a temporary database that will be deleted after the tests have been completed.
**This issue will be expanded upon when further information is discovered.** | priority | integration tests change values in the database for localhost or live server description when running the integration tests more than once the number of failed tests increases from to the integration tests access the actual database used in production or what is used locally and change it since the tests fail the database is changed so it doesn t work as intended anymore possible suggested solution change so the tests run in an temporary database that will be deleted after the tests have been completed this issue will be expanded upon when further information is discovered | 1 |
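The suggested fix above (run the tests against a throwaway database that is deleted afterwards) can be sketched as a small helper. This is an illustrative Python sketch, not the project's C# test code; the helper name and the schema are made up:

```python
import contextlib
import os
import sqlite3
import tempfile

@contextlib.contextmanager
def temporary_database():
    """Yield a connection to a throwaway SQLite database, deleted afterwards."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    try:
        yield conn
    finally:
        conn.close()
        os.remove(path)  # the database never outlives the test run

# usage: run assertions against the temporary database, not the live one
with temporary_database() as conn:
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
    conn.execute("INSERT INTO users (id) VALUES (1)")
    count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Because the database file is created and removed per run, failing tests cannot leave state behind for the next run.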
195,206 | 6,905,165,668 | IssuesEvent | 2017-11-27 05:16:17 | tkanezaki/phoenix | https://api.github.com/repos/tkanezaki/phoenix | closed | Fix sorting of API responses | bug priority:high | Fix the sort order to match the following
**GET /messages**
Message.weight (Desc),
Message.displayStartDate (Desc),
Message.messageID (Desc)
**GET /feeds, GET /merchandises, GET /leaflets**
Article.weight (Desc),
Article.displayStartDate (Desc),
Article.articleId (Desc)
**GET /feeds/clipped**
~ArticleClip.createdAt (Desc)~
ArticleClip.id (Desc)
**GET /feed-labelgroups, GET /merchandise-labelgroups**
LabelGroup.weight (Desc),
LabelGroup.labelGroupId (Asc),
[ Label.weight (Desc),
Label.labelId (Asc) ]
**GET /merchandise-categories**
Label.weight (Desc),
Label.labelId (Asc)
**GET /shops**
Shop.weight (Desc),
Shop.subdivisionISO (Asc),
Shop.shopId (Desc)
**GET /shops/clipped**
~ShopClip.createdAt (Desc)~
ShopClip.id (Desc)
**GET /shops/coodinates**
distance (Asc)
**GET /shops/bounds**
Shop.shopId (Asc)
**GET /shop-attributegroups**
AttributeGroup.weight (Desc),
AttributeGroup.attributeGroupId (Asc),
[ Attribute.weight (Desc),
Attribute.attributeId (Asc) ]
**GET /checkin-histories**
~CheckinHistory.createdAt (Desc)~
CheckinHistory.id (Desc)
**GET /coupons**
Coupon.weight (Desc),
Coupon.applicationStartDate (Desc)
Coupon.couponId (Desc)
**GET /coupon-categories**
CouponCategory.weight (Desc),
CouponCategory.couponCategoryId (Asc)
**GET /coupons/clipped**
~CouponClip.createdAt (Desc)~
CouponClip.id (Desc) | 1.0 | Fix sorting of API responses - Fix the sort order to match the following
**GET /messages**
Message.weight (Desc),
Message.displayStartDate (Desc),
Message.messageID (Desc)
**GET /feeds, GET /merchandises, GET /leaflets**
Article.weight (Desc),
Article.displayStartDate (Desc),
Article.articleId (Desc)
**GET /feeds/clipped**
~ArticleClip.createdAt (Desc)~
ArticleClip.id (Desc)
**GET /feed-labelgroups, GET /merchandise-labelgroups**
LabelGroup.weight (Desc),
LabelGroup.labelGroupId (Asc),
[ Label.weight (Desc),
Label.labelId (Asc) ]
**GET /merchandise-categories**
Label.weight (Desc),
Label.labelId (Asc)
**GET /shops**
Shop.weight (Desc),
Shop.subdivisionISO (Asc),
Shop.shopId (Desc)
**GET /shops/clipped**
~ShopClip.createdAt (Desc)~
ShopClip.id (Desc)
**GET /shops/coodinates**
distance (Asc)
**GET /shops/bounds**
Shop.shopId (Asc)
**GET /shop-attributegroups**
AttributeGroup.weight (Desc),
AttributeGroup.attributeGroupId (Asc),
[ Attribute.weight (Desc),
Attribute.attributeId (Asc) ]
**GET /checkin-histories**
~CheckinHistory.createdAt (Desc)~
CheckinHistory.id (Desc)
**GET /coupons**
Coupon.weight (Desc),
Coupon.applicationStartDate (Desc)
Coupon.couponId (Desc)
**GET /coupon-categories**
CouponCategory.weight (Desc),
CouponCategory.couponCategoryId (Asc)
**GET /coupons/clipped**
~CouponClip.createdAt (Desc)~
CouponClip.id (Desc) | priority | apiレスポンスのソート修正 以下の通りになるよう修正 get messages message weight desc message displaystartdate desc message messageid desc get feeds get merchandises get leaflets article weight desc article displaystartdate desc article articleid desc get feeds clipped articleclip createdat desc articleclip id desc get feed labelgroups get merchandise labelgroups labelgroup weight desc labelgroup labelgroupid asc label weight desc label labelid asc get merchandise categories label weight desc label labelid asc get shops shop weight desc shop subdivisioniso asc shop shopid desc get shops clipped shopclip createdat desc shopclip id desc get shops coodinates distance asc get shops bounds shop shopid asc get shop attributegroups attributegroup weight desc attributegroup attributegroupid asc attribute weight desc attribute attributeid asc get checkin histories checkinhistory createdat desc checkinhistory id desc get coupons coupon weight desc coupon applicationstartdate desc coupon couponid desc get coupon categories couponcategory weight desc couponcategory couponcategoryid asc get coupons clipped couponclip createdat desc couponclip id desc | 1 |
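The ordering rules listed above (a primary key, then tie-breakers) can be expressed as a single multi-key sort. A hedged Python sketch with made-up message records, purely to illustrate the pattern; it is not the actual phoenix implementation:

```python
# sample records standing in for Message rows
# (ordering: weight desc, displayStartDate desc, messageID desc)
messages = [
    {"weight": 1, "displayStartDate": "2017-11-01", "messageID": 3},
    {"weight": 2, "displayStartDate": "2017-10-01", "messageID": 1},
    {"weight": 1, "displayStartDate": "2017-11-01", "messageID": 7},
]

# all three keys are descending, so one reverse sort on a tuple key suffices;
# mixed asc/desc orderings would need per-key negation or chained stable sorts
ordered = sorted(
    messages,
    key=lambda m: (m["weight"], m["displayStartDate"], m["messageID"]),
    reverse=True,
)

ids = [m["messageID"] for m in ordered]  # [1, 7, 3]
```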
155,484 | 5,956,336,562 | IssuesEvent | 2017-05-28 15:52:20 | bhwarren/Sutta-Data-Manager | https://api.github.com/repos/bhwarren/Sutta-Data-Manager | opened | Change sutta selector to something more descriptive | priority: High Todo UI | Like SC. Also put most recently edited above everything for quick access | 1.0 | Change sutta selector to something more descriptive - Like SC. Also put most recently edited above everything for quick access | priority | change sutta selector to something more descriptive like sc also put most recently edited above everything for quick access | 1 |
443,052 | 12,758,807,561 | IssuesEvent | 2020-06-29 03:39:23 | Azure/ARO-RP | https://api.github.com/repos/Azure/ARO-RP | closed | c# serialisation bug | priority-high size-small | I don't have all the details, but when running a k8s createorupdate to update an object, I think I posted
```yaml
...
metadata:
generation: 2
...
```
and I got some error back from k8s that makes me think that the C# yaml->json conversion incorrectly serialised that to `"metadata": {"generation": "2"}` (i.e. "2" as a string, not a number).
are you able to recreate this?
| 1.0 | c# serialisation bug - I don't have all the details, but when running a k8s createorupdate to update an object, I think I posted
```yaml
...
metadata:
generation: 2
...
```
and I got some error back from k8s that makes me think that the C# yaml->json conversion incorrectly serialised that to `"metadata": {"generation": "2"}` (i.e. "2" as a string, not a number).
are you able to recreate this?
| priority | c serialisation bug i don t have all the details but when running a createorupdate to update an object i think i posted yaml metadata generation and i got some error back from that makes me think that the yaml json c incorrectly serialised that to metadata generation i e as a string not a number are you able to recreate this | 1 |
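For reference, the number-vs-string distinction the report above describes can be checked in a few lines. This is a Python illustration of the expected JSON only; the bug itself is in the C# yaml-to-json path:

```python
import json

# a correct yaml->json conversion keeps the generation value as a number...
correct = json.dumps({"metadata": {"generation": 2}})
# ...while the buggy path described above stringifies it
buggy = json.dumps({"metadata": {"generation": "2"}})

print(correct)  # {"metadata": {"generation": 2}}
print(buggy)    # {"metadata": {"generation": "2"}}
```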
658,058 | 21,877,019,979 | IssuesEvent | 2022-05-19 11:07:31 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | closed | Can't instantiate VMTemplates when setting VM.instantiate_name to false | Category: Sunstone Type: Bug Sponsored Status: Accepted Priority: High | **Description**
When setting VM.instantiate_name to false, users can't instantiate VM templates.
**To Reproduce**
Go to the view YAML configuration file and set `VM.instantiate_name` to false.
**Expected behavior**
To be able to instantiate VMTemplates when this setting is disabled.
**Details**
- Affected Component: Sunstone
- Hypervisor: [e.g. KVM]
- Version: 6.4, development
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [x] Code committed
- [ ] Testing - QA
- [x] Documentation (Release notes - resolved issues, compatibility, known issues) | 1.0 | Can't instantiate VMTemplates when setting VM.instantiate_name to false - **Description**
When setting VM.instantiate_name to false, users can't instantiate VM templates.
**To Reproduce**
Go to the view YAML configuration file and set `VM.instantiate_name` to false.
**Expected behavior**
To be able to instantiate VMTemplates when this setting is disabled.
**Details**
- Affected Component: Sunstone
- Hypervisor: [e.g. KVM]
- Version: 6.4, development
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [x] Code committed
- [ ] Testing - QA
- [x] Documentation (Release notes - resolved issues, compatibility, known issues) | priority | can t instantiate vmtemplates when setting vm instantiate name to false description when setting vm instantiate name to false users can t instantiate vm templates to reproduce go to the view yaml configuration file and set vm instantiate name to false expected behavior to be able to instantiate vmtemplates when this setting is disabled details affected component sunstone hypervisor version development progress status code commited testing qa documentation release notes resolved issues compatibility known issues | 1 |
443,414 | 12,794,123,254 | IssuesEvent | 2020-07-02 06:09:01 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Need a load API Policies to memory for subscription validation | Priority/Highest Type/Improvement | ### Describe your problem(s)
To cater for in-memory subscription validation, we need a separate REST API to retrieve API Policies, and we need to store them in a map. | 1.0 | Need a load API Policies to memory for subscription validation - ### Describe your problem(s)
To cater for in-memory subscription validation, we need a separate REST API to retrieve API Policies, and we need to store them in a map. | priority | need a load api policies to memory for subscription validation describe your problem s to cater in memory subscription validation we need a separate rest api to retrieve api policies and need to store them in a map | 1
719,043 | 24,743,671,152 | IssuesEvent | 2022-10-21 07:56:40 | AY2223S1-CS2103T-W13-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-W13-1/tp | closed | Update Developer Guide | type.DG priority.high | Update implementation details for each user.
- [ ] Po-Hsien
- [ ] Bao Bin
- [ ] Zizheng
- [ ] Sheyuan
- [ ] Silas | 1.0 | Update Developer Guide - Update implementation details for each user.
- [ ] Po-Hsien
- [ ] Bao Bin
- [ ] Zizheng
- [ ] Sheyuan
- [ ] Silas | priority | update developer guide update implementation details for each user po hsien bao bin zizheng sheyuan silas | 1 |
273,606 | 8,550,783,126 | IssuesEvent | 2018-11-07 16:20:57 | CypherpunkArmory/UserLAnd | https://api.github.com/repos/CypherpunkArmory/UserLAnd | closed | unable to start userland | high priority | userland 0.3.4 (17)
Samsung Galaxy Tab 2
Android 7.1.2
LineageOS 14.1-20180131-UNOFFICIAL-espressowifi
While loading the assets Userland always hangs on the part about busybox:
"Extracting: Exec: Failed to execute command [../support/busybox, sh, -c, ../support/execInPro..."
Userland then just hangs, and I have to manually end it.
The PRoot debug log is attached.
[PRoot_Debug_Log.txt](https://github.com/CypherpunkArmory/UserLAnd/files/2346830/PRoot_Debug_Log.txt)
| 1.0 | unable to start userland - userland 0.3.4 (17)
Samsung Galaxy Tab 2
Android 7.1.2
LineageOS 14.1-20180131-UNOFFICIAL-espressowifi
While loading the assets Userland always hangs on the part about busybox:
"Extracting: Exec: Failed to execute command [../support/busybox, sh, -c, ../support/execInPro..."
Userland then just hangs, and I have to manually end it.
The PRoot debug log is attached.
[PRoot_Debug_Log.txt](https://github.com/CypherpunkArmory/UserLAnd/files/2346830/PRoot_Debug_Log.txt)
| priority | unable to start userland userland samsung galaxy tab android lineageos unofficial espressowifi while loading the assets userland always hangs on the part about busybox extracting exec failed to execute command support busybox sh c support execinpro userland then just hangs and i have to manually end it the proot debug log is attached | 1 |
616,401 | 19,301,650,733 | IssuesEvent | 2021-12-13 06:41:28 | OpenTabletDriver/OpenTabletDriver | https://api.github.com/repos/OpenTabletDriver/OpenTabletDriver | closed | Valid inputs being dropped - Hover distance and area issues | bug priority:high | ## Description
<!-- Describe the issue below -->
Many valid inputs are being ignored, leading certain tablets to have their input cut off before it should be. This can lead to lowered hover distance, and to certain parts of areas (specifically some parts of the area deadzones on Wacom tablets) being unreachable or having input cut off too soon.
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| OpenTabletDriver Version | 0.6.0 Pre-release
| Tablet | Tested on CTL-480 but many others likely affected
| 1.0 | Valid inputs being dropped - Hover distance and area issues - ## Description
<!-- Describe the issue below -->
Many valid inputs are being ignored, leading certain tablets to have their input cut off before it should be. This can lead to lowered hover distance, and to certain parts of areas (specifically some parts of the area deadzones on Wacom tablets) being unreachable or having input cut off too soon.
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| OpenTabletDriver Version | 0.6.0 Pre-release
| Tablet | Tested on CTL-480 but many others likely affected
| priority | valid inputs being dropped hover distance and area issues description many valid inputs are being ignored leading certain tablets to have their input cut off before it should be this can lead to lowered hover distance and certain parts of areas specifically some parts of the area deadzones on wacom tablets being unreachable or having input cut off too soon system information name value opentabletdriver version pre release tablet tested on ctl but many others likely affected | 1 |
633,201 | 20,247,686,467 | IssuesEvent | 2022-02-14 15:08:25 | VulcanWM/munity | https://api.github.com/repos/VulcanWM/munity | closed | Lyrics | TYPE: bug PRIORITY: high PROGRESS: completed | - ~~Remove the `embed` and the number in the lyric~~
- ~~Remove the one word lyrics~~
- ~~Remove the songs with edit in it~~ | 1.0 | Lyrics - - ~~Remove the `embed` and the number in the lyric~~
- ~~Remove the one word lyrics~~
- ~~Remove the songs with edit in it~~ | priority | lyrics remove the embed and the number in the lyric remove the one word lyrics remove the songs with edit in it | 1 |
116,711 | 4,705,609,065 | IssuesEvent | 2016-10-13 14:56:52 | geosolutions-it/evo-odas | https://api.github.com/repos/geosolutions-it/evo-odas | closed | ImageMosaic date ingestion from landsat files | duplicate Priority: High task | Landsat images filenames follow this pattern:
LC81390452014295LGN00
Where 2014295 is the Julian day (format=YYYYDDD). This needs to be ingested correctly in the mosaic as the time dimension | 1.0 | ImageMosaic date ingestion from landsat files - Landsat image filenames follow this pattern:
LC81390452014295LGN00
Where 2014295 is the Julian day (format=YYYYDDD). This needs to be ingested correctly in the mosaic as the time dimension | priority | imagemosaic date ingestion from landsat files landsat images filenames follow this pattern where is the julian day format yyyyddd this needs to be ingested correctly in the mosaic as time dimension | 1
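The YYYYDDD date embedded in filenames like the one above can be parsed with a day-of-year format code. An illustrative Python sketch; the slice positions assume the standard Landsat 8 scene-ID layout shown above:

```python
from datetime import datetime

scene_id = "LC81390452014295LGN00"

# characters 9-16 hold the acquisition date as YYYYDDD (year + day of year)
julian = scene_id[9:16]          # "2014295"
acquired = datetime.strptime(julian, "%Y%j")

print(acquired.date())  # 2014-10-22
```

A mosaic ingestion step would then use the parsed value as the granule's time attribute instead of the raw 7-digit string.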
230,434 | 7,610,142,021 | IssuesEvent | 2018-05-01 06:05:08 | connectedbusiness/eshopconnected | https://api.github.com/repos/connectedbusiness/eshopconnected | closed | Magento: Downloaded orders that are already completed | bug high priority | #151904 - eShopConnected downloaded orders that are already completed in Magento
Details: After applying the patch for eShopConnected, it started to download the orders from Magento; however, it downloaded ALL the orders, including the COMPLETED orders, instead of OPEN orders only.
David Nelson
www.dynenttech.com | 1.0 | Magento: Downloaded orders that are already completed - #151904 - eShopConnected downloaded orders that are already completed in Magento
Details: After applying the patch for eShopConnected, it started to download the orders from Magento; however, it downloaded ALL the orders, including the COMPLETED orders, instead of OPEN orders only.
David Nelson
www.dynenttech.com | priority | magento downloaded orders that are already completed eshopconnected downloaded orders that are already completed in magento details after applying the patch for eshopconnected it started to download the orders from magento however it downloaded all the orders including the completed orders instead of open orders only david nelson | 1 |
807,927 | 30,025,568,117 | IssuesEvent | 2023-06-27 05:42:49 | EESSI/eessi-bot-software-layer | https://api.github.com/repos/EESSI/eessi-bot-software-layer | reopened | job manager crash due to "Bad credentials" | difficulty:medium priority:high bug | We should make the communication with GitHub in `process_running_jobs` in the job manager a bit more robust, and retry in case something went wrong?
This could be some kind of rate limiting thing in GitHub (the job manager doesn't use a GitHub token)
```
Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/mnt/shared/home/bot/eessi-bot-software-layer/eessi_bot_job_manager.py", line 715, in <module>
main()
File "/mnt/shared/home/bot/eessi-bot-software-layer/eessi_bot_job_manager.py", line 680, in main
job_manager.process_running_jobs(known_jobs[rj])
File "/mnt/shared/home/bot/eessi-bot-software-layer/eessi_bot_job_manager.py", line 351, in process_running_jobs
pullrequest = repo.get_pull(int(pr_number))
File "/mnt/shared/home/bot/.local/lib/python3.6/site-packages/github/Repository.py", line 2792, in get_pull
"GET", f"{self.url}/pulls/{number}"
File "/mnt/shared/home/bot/.local/lib/python3.6/site-packages/github/Requester.py", line 355, in requestJsonAndCheck
verb, url, parameters, headers, input, self.__customConnection(url)
File "/mnt/shared/home/bot/.local/lib/python3.6/site-packages/github/Requester.py", line 378, in __check
raise self.__createException(status, responseHeaders, output)
github.GithubException.BadCredentialsException: 401 {"message": "Bad credentials", "documentation_url": "https://docs.github.com/rest"}
```
tail of job manager log file when crash happened:
```
[20230115-T00:13:37] job manager main loop: iteration 3097
[20230115-T00:13:37] job manager main loop: known_jobs='3311'
[20230115-T00:13:37] run_subprocess(): 'get_current_jobs(): squeue command' by running '/usr/bin/squeue --long --user=bot' in directory '/mnt/shared/home/bot/eessi-bot-software-layer'
[20230115-T00:13:37] run_cmd(): Result for running '/usr/bin/squeue --long --user=bot' in 'None
stdout 'Sun Jan 15 00:13:37 2023
JOBID PARTITION NAME USER STATE TIME TIME_LIMI NODES NODELIST(REASON)
3311 compute eessi-bo bot RUNNING 9:02:11 UNLIMITED 1 fair-mastodon-c5-2xlarge-0001
'
stderr ''
exit code 0
[20230115-T00:13:37] job manager main loop: current_jobs='3311'
[20230115-T00:13:37] job manager main loop: new_jobs=''
[20230115-T00:13:37] job manager main loop: running_jobs='3311'
[20230115-T00:13:37] Found metadata file at /mnt/shared/home/bot/eessi-bot-software-layer/jobs/submitted/3311/_bot_job3311.metadata
``` | 1.0 | job manager crash due to "Bad credentials" - We should make the communication with GitHub in `process_running_jobs` in the job manager a bit more robust, and retry in case something went wrong?
This could be some kind of rate limiting thing in GitHub (the job manager doesn't use a GitHub token)
```
Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/mnt/shared/home/bot/eessi-bot-software-layer/eessi_bot_job_manager.py", line 715, in <module>
main()
File "/mnt/shared/home/bot/eessi-bot-software-layer/eessi_bot_job_manager.py", line 680, in main
job_manager.process_running_jobs(known_jobs[rj])
File "/mnt/shared/home/bot/eessi-bot-software-layer/eessi_bot_job_manager.py", line 351, in process_running_jobs
pullrequest = repo.get_pull(int(pr_number))
File "/mnt/shared/home/bot/.local/lib/python3.6/site-packages/github/Repository.py", line 2792, in get_pull
"GET", f"{self.url}/pulls/{number}"
File "/mnt/shared/home/bot/.local/lib/python3.6/site-packages/github/Requester.py", line 355, in requestJsonAndCheck
verb, url, parameters, headers, input, self.__customConnection(url)
File "/mnt/shared/home/bot/.local/lib/python3.6/site-packages/github/Requester.py", line 378, in __check
raise self.__createException(status, responseHeaders, output)
github.GithubException.BadCredentialsException: 401 {"message": "Bad credentials", "documentation_url": "https://docs.github.com/rest"}
```
tail of job manager log file when crash happened:
```
[20230115-T00:13:37] job manager main loop: iteration 3097
[20230115-T00:13:37] job manager main loop: known_jobs='3311'
[20230115-T00:13:37] run_subprocess(): 'get_current_jobs(): squeue command' by running '/usr/bin/squeue --long --user=bot' in directory '/mnt/shared/home/bot/eessi-bot-software-layer'
[20230115-T00:13:37] run_cmd(): Result for running '/usr/bin/squeue --long --user=bot' in 'None
stdout 'Sun Jan 15 00:13:37 2023
JOBID PARTITION NAME USER STATE TIME TIME_LIMI NODES NODELIST(REASON)
3311 compute eessi-bo bot RUNNING 9:02:11 UNLIMITED 1 fair-mastodon-c5-2xlarge-0001
'
stderr ''
exit code 0
[20230115-T00:13:37] job manager main loop: current_jobs='3311'
[20230115-T00:13:37] job manager main loop: new_jobs=''
[20230115-T00:13:37] job manager main loop: running_jobs='3311'
[20230115-T00:13:37] Found metadata file at /mnt/shared/home/bot/eessi-bot-software-layer/jobs/submitted/3311/_bot_job3311.metadata
``` | priority | job manager crash due to bad credentials we should make the communication with github in process running jobs in the job manager a bit more robust and retry in case something went wrong this could be some kind of rate limiting thing in github the job manager doesn t use a github token traceback most recent call last file usr runpy py line in run module as main main mod spec file usr runpy py line in run code exec code run globals file mnt shared home bot eessi bot software layer eessi bot job manager py line in main file mnt shared home bot eessi bot software layer eessi bot job manager py line in main job manager process running jobs known jobs file mnt shared home bot eessi bot software layer eessi bot job manager py line in process running jobs pullrequest repo get pull int pr number file mnt shared home bot local lib site packages github repository py line in get pull get f self url pulls number file mnt shared home bot local lib site packages github requester py line in requestjsonandcheck verb url parameters headers input self customconnection url file mnt shared home bot local lib site packages github requester py line in check raise self createexception status responseheaders output github githubexception badcredentialsexception message bad credentials documentation url tail of job manager log file when crash happened job manager main loop iteration job manager main loop known jobs run subprocess get current jobs squeue command by running usr bin squeue long user bot in directory mnt shared home bot eessi bot software layer run cmd result for running usr bin squeue long user bot in none stdout sun jan jobid partition name user state time time limi nodes nodelist reason compute eessi bo bot running unlimited fair mastodon stderr exit code job manager main loop current jobs job manager main loop new jobs job manager main loop running jobs found metadata file at mnt shared home bot eessi bot software layer jobs submitted bot metadata | 1 |
380,104 | 11,253,901,857 | IssuesEvent | 2020-01-11 19:30:44 | CodeletApp/codelet-app | https://api.github.com/repos/CodeletApp/codelet-app | closed | Question Process: Step 1 | High Priority | - What sub-components/features can we split the question list view into?
- The sub-components should be reusable for any question data.
### Sub Components
- Approaches.js
- Footer/Next Button | 1.0 | Question Process: Step 1 - - What sub-components/features can we split the question list view into?
- The sub-components should be reusable for any question data.
### Sub Components
- Approaches.js
- Footer/Next Button | priority | question process step what sub components features can we split the question list view into the sub components should be reusable for any question data sub components approaches js footer next button | 1 |
144,497 | 5,541,476,739 | IssuesEvent | 2017-03-22 12:58:42 | Osslack/HANA_SSBM | https://api.github.com/repos/Osslack/HANA_SSBM | closed | Allgemeine Informationen | Priority_high | Allgemeine statistische informationen auswerten:
- Durchschnitt
- Median
- Minimum
- Maximum
- Standard Abweichung
- Insgesamt | 1.0 | Allgemeine Informationen - Allgemeine statistische informationen auswerten:
- Durchschnitt
- Median
- Minimum
- Maximum
- Standard Abweichung
- Insgesamt | priority | allgemeine informationen allgemeine statistische informationen auswerten durchschnitt median minimum maximum standard abweichung insgesamt | 1 |
624,870 | 19,711,228,382 | IssuesEvent | 2022-01-13 05:42:31 | rich-iannone/pointblank | https://api.github.com/repos/rich-iannone/pointblank | closed | `col_vals_XXX` fails on SQL Server connection | Type: ☹︎ Bug Difficulty: [3] Advanced Effort: [3] High Priority: [3] High | ## Prework
* [x] Read and agree to the [code of conduct](https://www.contributor-covenant.org/version/2/0/code_of_conduct/) and [contributing guidelines](https://github.com/rich-iannone/pointblank/blob/master/.github/CONTRIBUTING.md).
* [x] If there is [already a relevant issue](https://github.com/rich-iannone/pointblank/issues), whether open or closed, comment on the existing thread instead of posting a new issue.
* [ ] Post a [minimal reproducible example](https://www.tidyverse.org/help/) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [ ] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [ ] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
## Description
Connecting to one of our databases and trying to run `interrrogate()` throws an error when using the `col_vals_XXX` family of functions. So in example below, the `col_is_date()` runs fine, but `col_vals_between()` does not. I can't really figure out why?
If I collect the needed columns using `dplyr::collect()` and run the exact same chunk everything works fine. Same goes for a similar interrogation on the `small_table_duckdb`.
NOTE: I'm aware that the example is a pseudo reprex - I included it here, so you can see what I did. If there is anyway I can improve the example code, please just let me know!
## Reproducible example
``` r
library(pointblank)
small_table_duckdb <-
db_tbl(
table = small_table,
dbname = ":memory:",
dbtype = "duckdb"
)
stdb_agent <- create_agent(read_fn = ~ small_table_duckdb) |>
col_is_character(vars(b)) |>
col_vals_between(vars(a), left = 0, right = 1000000) |>
interrogate()
con <- DBI::dbConnect(odbc::odbc(),
driver = "ODBC Driver 17 for SQL Server",
server = <OUR SERVER>,
database = <OUR DATABASE>,
uid = <THE UID>,
pwd = <THE PWD>)
kemi <- dplyr::tbl(con, dbplyr::in_schema("data", "kemi"))
kemi_agent <- create_agent(read_fn = ~ kemi) |>
col_is_date(vars(dato)) |>
col_vals_between(
columns = vars(jord_C),
left = 0.1,
right = 30,
na_pass = TRUE
) |>
interrogate()
```
#> Error: nanodbc/nanodbc.cpp:1655: 00000: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]An expression of non-boolean type specified in a context where a condition is expected, near ')'. [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'jord_C'. [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared.
#> <SQL> 'SELECT "n"
#> FROM (SELECT COUNT(*) AS "n"
#> FROM (SELECT "progId", "aktId", "dato", "sted", "avneknippe_5m", "befaestet_areal_15m", "biomasse_N", "biomasse_torvaegt", "bladbille_5m", "blottet_mineraljord_5m", "bredbladede_urter_5m", "dvaergbuske_5m", "enebaer_5m", "fornelag_tykkelse_1", "fornelag_tykkelse_2", "fornelag_tykkelse_3", "fornelag_tykkelse_4", "graaris_5m", "graesser_5m", "halvgraesser_siv_og_frytle_5m", "havtorn_5m", "hegnet_areal_15m", "holjer_5m", "hulheder_15m", "humuslag_tykkelse_1", "humuslag_tykkelse_2", "humuslag_tykkelse_3", "humuslag_tykkelse_4", "jord_C", "jord_C_under_detektionsgraensen", "jord_N", "jord_N_under_detektionsgraensen", "jord_basemaetning", "jord_fosfortal", "jord_fosfortal_under_detektionsgraensen", "jord_ph", "jordprove_torvaegt", "klokkelyng_5m", "lichener_5m", "lysforhold_1", "lysforhold_2", "lysforhold_3", "lysforhold_4", "morlagstykkelse_1", "morlagstykkelse_2", "morlagstykkelse_3", "morlagstykkelse_4", "mosser_5m", "naturtype", "plante_N", "plante_N_art", "plante_P", "raad_15m", "trunter_15m", "vadegraes_5m", "vand_NH4", "vand_NH4_N_under_detektionsgraensen", "vand_NO3", "vand_NO3_N_under_detektionsgraensen", "vand_PO4", "vand_PO4_P_under_detektionsgraensen", "vand_ledningsevne", "vand_ph", "vandflade_5m", "vandstand_pejling", "vedplanter_over_1m", "vedplanter_samlet", "vedplanter_under_1m", "vegetationshojde_1", "vegetationshojde_2", "vegetationshojde_3", "vegetationshojde_4", CASE
#> WHEN ("jord_C" >= 0.1 AND "jord_C" <= 30.0) THEN (TRUE)
#> WHEN ("jord_C" < 0.1 OR "jord_C" > 30.0) THEN (FALSE)
#> WHEN ((("jord_C") IS NULL) AND TRUE) THEN (TRUE)
#> WHEN ((("jord_C") IS NULL) AND TRUE = FALSE) THEN (FALSE)
#> END AS "pb_is_good_"
#> FROM "data"."kemi") "q01") "q02"'
``` r
kemi |>
dplyr::select(dato, jord_C) |>
dplyr::collect() -> kemi_local
kemi_agent <- create_agent(read_fn = ~ kemi_local) |>
col_is_date(vars(dato)) |>
col_vals_between(
columns = vars(jord_C),
left = 0.1,
right = 30,
na_pass = TRUE
) |>
interrogate()
```
<sup>Created on 2021-09-08 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
```
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.1.1 (2021-08-10)
#> os Ubuntu 20.04.3 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language en_US:en
#> collate en_US.UTF-8
#> ctype en_US.UTF-8
#> tz Europe/Copenhagen
#> date 2021-09-08
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date lib
#> assertthat 0.2.1 2019-03-21 [1]
#> backports 1.2.1 2020-12-09 [1]
#> bit 4.0.4 2020-08-04 [1]
#> bit64 4.0.5 2020-08-30 [1]
#> blastula 0.3.2 2020-05-19 [1]
#> blob 1.2.2 2021-07-23 [1]
#> cli 3.0.1 2021-07-17 [3]
#> crayon 1.4.1 2021-02-08 [3]
#> DBI 1.1.1 2021-01-15 [1]
#> dbplyr 2.1.1 2021-04-06 [1]
#> digest 0.6.27 2020-10-24 [3]
#> dplyr 1.0.7 2021-06-18 [1]
#> duckdb 0.2.9 2021-09-06 [1]
#> ellipsis 0.3.2 2021-04-29 [3]
#> evaluate 0.14 2019-05-28 [3]
#> fansi 0.5.0 2021-05-25 [1]
#> fastmap 1.1.0 2021-01-25 [3]
#> fs 1.5.0 2020-07-31 [1]
#> generics 0.1.0 2020-10-31 [1]
#> glue 1.4.2 2020-08-27 [3]
#> highr 0.9 2021-04-16 [3]
#> hms 1.1.0 2021-05-17 [1]
#> htmltools 0.5.2 2021-08-25 [1]
#> knitr 1.33 2021-04-24 [3]
#> lifecycle 1.0.0 2021-02-15 [3]
#> magrittr 2.0.1 2020-11-17 [3]
#> odbc 1.3.2 2021-04-03 [1]
#> pillar 1.6.2 2021-07-29 [3]
#> pkgconfig 2.0.3 2019-09-22 [3]
#> pointblank * 0.8.0.9000 2021-09-08 [1]
#> purrr 0.3.4 2020-04-17 [3]
#> R6 2.5.1 2021-08-19 [3]
#> Rcpp 1.0.7 2021-07-07 [1]
#> reprex 2.0.1 2021-08-05 [1]
#> rlang 0.4.11 2021-04-30 [3]
#> rmarkdown 2.10 2021-08-06 [1]
#> rstudioapi 0.13 2020-11-12 [3]
#> sessioninfo 1.1.1 2018-11-05 [3]
#> stringi 1.7.4 2021-08-25 [1]
#> stringr 1.4.0 2019-02-10 [3]
#> styler 1.5.1 2021-07-13 [1]
#> tibble 3.1.4 2021-08-25 [1]
#> tidyselect 1.1.1 2021-04-30 [1]
#> utf8 1.2.2 2021-07-24 [3]
#> vctrs 0.3.8 2021-04-29 [3]
#> withr 2.4.2 2021-04-18 [3]
#> xfun 0.25 2021-08-06 [1]
#> yaml 2.2.1 2020-02-01 [3]
#> source
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.0.5)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.2)
#> CRAN (R 4.0.5)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.0.5)
#> CRAN (R 4.0.4)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.0)
#> Github (rich-iannone/pointblank@9396547)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.5)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.5)
#> CRAN (R 4.0.5)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.0)
#>
#> [1] /home/au206907/R/x86_64-pc-linux-gnu-library/4.1
#> [2] /usr/local/lib/R/site-library
#> [3] /usr/lib/R/site-library
#> [4] /usr/lib/R/library
</details>
## Expected result
I expected the interrogation to be carried out just like for the other cases.
| 1.0 | `col_vals_XXX` fails on SQL Server connection - ## Prework
* [x] Read and agree to the [code of conduct](https://www.contributor-covenant.org/version/2/0/code_of_conduct/) and [contributing guidelines](https://github.com/rich-iannone/pointblank/blob/master/.github/CONTRIBUTING.md).
* [x] If there is [already a relevant issue](https://github.com/rich-iannone/pointblank/issues), whether open or closed, comment on the existing thread instead of posting a new issue.
* [ ] Post a [minimal reproducible example](https://www.tidyverse.org/help/) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [ ] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [ ] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
## Description
Connecting to one of our databases and trying to run `interrrogate()` throws an error when using the `col_vals_XXX` family of functions. So in example below, the `col_is_date()` runs fine, but `col_vals_between()` does not. I can't really figure out why?
If I collect the needed columns using `dplyr::collect()` and run the exact same chunk everything works fine. Same goes for a similar interrogation on the `small_table_duckdb`.
NOTE: I'm aware that the example is a pseudo reprex - I included it here, so you can see what I did. If there is anyway I can improve the example code, please just let me know!
## Reproducible example
``` r
library(pointblank)
small_table_duckdb <-
db_tbl(
table = small_table,
dbname = ":memory:",
dbtype = "duckdb"
)
stdb_agent <- create_agent(read_fn = ~ small_table_duckdb) |>
col_is_character(vars(b)) |>
col_vals_between(vars(a), left = 0, right = 1000000) |>
interrogate()
con <- DBI::dbConnect(odbc::odbc(),
driver = "ODBC Driver 17 for SQL Server",
server = <OUR SERVER>,
database = <OUR DATABASE>,
uid = <THE UID>,
pwd = <THE PWD>)
kemi <- dplyr::tbl(con, dbplyr::in_schema("data", "kemi"))
kemi_agent <- create_agent(read_fn = ~ kemi) |>
col_is_date(vars(dato)) |>
col_vals_between(
columns = vars(jord_C),
left = 0.1,
right = 30,
na_pass = TRUE
) |>
interrogate()
```
#> Error: nanodbc/nanodbc.cpp:1655: 00000: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]An expression of non-boolean type specified in a context where a condition is expected, near ')'. [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'jord_C'. [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared.
#> <SQL> 'SELECT "n"
#> FROM (SELECT COUNT(*) AS "n"
#> FROM (SELECT "progId", "aktId", "dato", "sted", "avneknippe_5m", "befaestet_areal_15m", "biomasse_N", "biomasse_torvaegt", "bladbille_5m", "blottet_mineraljord_5m", "bredbladede_urter_5m", "dvaergbuske_5m", "enebaer_5m", "fornelag_tykkelse_1", "fornelag_tykkelse_2", "fornelag_tykkelse_3", "fornelag_tykkelse_4", "graaris_5m", "graesser_5m", "halvgraesser_siv_og_frytle_5m", "havtorn_5m", "hegnet_areal_15m", "holjer_5m", "hulheder_15m", "humuslag_tykkelse_1", "humuslag_tykkelse_2", "humuslag_tykkelse_3", "humuslag_tykkelse_4", "jord_C", "jord_C_under_detektionsgraensen", "jord_N", "jord_N_under_detektionsgraensen", "jord_basemaetning", "jord_fosfortal", "jord_fosfortal_under_detektionsgraensen", "jord_ph", "jordprove_torvaegt", "klokkelyng_5m", "lichener_5m", "lysforhold_1", "lysforhold_2", "lysforhold_3", "lysforhold_4", "morlagstykkelse_1", "morlagstykkelse_2", "morlagstykkelse_3", "morlagstykkelse_4", "mosser_5m", "naturtype", "plante_N", "plante_N_art", "plante_P", "raad_15m", "trunter_15m", "vadegraes_5m", "vand_NH4", "vand_NH4_N_under_detektionsgraensen", "vand_NO3", "vand_NO3_N_under_detektionsgraensen", "vand_PO4", "vand_PO4_P_under_detektionsgraensen", "vand_ledningsevne", "vand_ph", "vandflade_5m", "vandstand_pejling", "vedplanter_over_1m", "vedplanter_samlet", "vedplanter_under_1m", "vegetationshojde_1", "vegetationshojde_2", "vegetationshojde_3", "vegetationshojde_4", CASE
#> WHEN ("jord_C" >= 0.1 AND "jord_C" <= 30.0) THEN (TRUE)
#> WHEN ("jord_C" < 0.1 OR "jord_C" > 30.0) THEN (FALSE)
#> WHEN ((("jord_C") IS NULL) AND TRUE) THEN (TRUE)
#> WHEN ((("jord_C") IS NULL) AND TRUE = FALSE) THEN (FALSE)
#> END AS "pb_is_good_"
#> FROM "data"."kemi") "q01") "q02"'
``` r
kemi |>
dplyr::select(dato, jord_C) |>
dplyr::collect() -> kemi_local
kemi_agent <- create_agent(read_fn = ~ kemi_local) |>
col_is_date(vars(dato)) |>
col_vals_between(
columns = vars(jord_C),
left = 0.1,
right = 30,
na_pass = TRUE
) |>
interrogate()
```
<sup>Created on 2021-09-08 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
```
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.1.1 (2021-08-10)
#> os Ubuntu 20.04.3 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language en_US:en
#> collate en_US.UTF-8
#> ctype en_US.UTF-8
#> tz Europe/Copenhagen
#> date 2021-09-08
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date lib
#> assertthat 0.2.1 2019-03-21 [1]
#> backports 1.2.1 2020-12-09 [1]
#> bit 4.0.4 2020-08-04 [1]
#> bit64 4.0.5 2020-08-30 [1]
#> blastula 0.3.2 2020-05-19 [1]
#> blob 1.2.2 2021-07-23 [1]
#> cli 3.0.1 2021-07-17 [3]
#> crayon 1.4.1 2021-02-08 [3]
#> DBI 1.1.1 2021-01-15 [1]
#> dbplyr 2.1.1 2021-04-06 [1]
#> digest 0.6.27 2020-10-24 [3]
#> dplyr 1.0.7 2021-06-18 [1]
#> duckdb 0.2.9 2021-09-06 [1]
#> ellipsis 0.3.2 2021-04-29 [3]
#> evaluate 0.14 2019-05-28 [3]
#> fansi 0.5.0 2021-05-25 [1]
#> fastmap 1.1.0 2021-01-25 [3]
#> fs 1.5.0 2020-07-31 [1]
#> generics 0.1.0 2020-10-31 [1]
#> glue 1.4.2 2020-08-27 [3]
#> highr 0.9 2021-04-16 [3]
#> hms 1.1.0 2021-05-17 [1]
#> htmltools 0.5.2 2021-08-25 [1]
#> knitr 1.33 2021-04-24 [3]
#> lifecycle 1.0.0 2021-02-15 [3]
#> magrittr 2.0.1 2020-11-17 [3]
#> odbc 1.3.2 2021-04-03 [1]
#> pillar 1.6.2 2021-07-29 [3]
#> pkgconfig 2.0.3 2019-09-22 [3]
#> pointblank * 0.8.0.9000 2021-09-08 [1]
#> purrr 0.3.4 2020-04-17 [3]
#> R6 2.5.1 2021-08-19 [3]
#> Rcpp 1.0.7 2021-07-07 [1]
#> reprex 2.0.1 2021-08-05 [1]
#> rlang 0.4.11 2021-04-30 [3]
#> rmarkdown 2.10 2021-08-06 [1]
#> rstudioapi 0.13 2020-11-12 [3]
#> sessioninfo 1.1.1 2018-11-05 [3]
#> stringi 1.7.4 2021-08-25 [1]
#> stringr 1.4.0 2019-02-10 [3]
#> styler 1.5.1 2021-07-13 [1]
#> tibble 3.1.4 2021-08-25 [1]
#> tidyselect 1.1.1 2021-04-30 [1]
#> utf8 1.2.2 2021-07-24 [3]
#> vctrs 0.3.8 2021-04-29 [3]
#> withr 2.4.2 2021-04-18 [3]
#> xfun 0.25 2021-08-06 [1]
#> yaml 2.2.1 2020-02-01 [3]
#> source
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.0.5)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.2)
#> CRAN (R 4.0.5)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.0.5)
#> CRAN (R 4.0.4)
#> CRAN (R 4.0.3)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.0)
#> Github (rich-iannone/pointblank@9396547)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.5)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.3)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.0.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.1)
#> CRAN (R 4.1.0)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.5)
#> CRAN (R 4.0.5)
#> CRAN (R 4.1.0)
#> CRAN (R 4.0.0)
#>
#> [1] /home/au206907/R/x86_64-pc-linux-gnu-library/4.1
#> [2] /usr/local/lib/R/site-library
#> [3] /usr/lib/R/site-library
#> [4] /usr/lib/R/library
</details>
## Expected result
I expected the interrogation to be carried out just like for the other cases.
| priority | col vals xxx fails on sql server connection prework read and agree to the and if there is whether open or closed comment on the existing thread instead of posting a new issue post a so the maintainer can troubleshoot the problems you identify a reproducible example is runnable post enough r code and data so any onlooker can create the error on their own computer minimal reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand readable format your code according to the description connecting to one of our databases and trying to run interrrogate throws an error when using the col vals xxx family of functions so in example below the col is date runs fine but col vals between does not i can t really figure out why if i collect the needed columns using dplyr collect and run the exact same chunk everything works fine same goes for a similar interrogation on the small table duckdb note i m aware that the example is a pseudo reprex i included it here so you can see what i did if there is anyway i can improve the example code please just let me know reproducible example r library pointblank small table duckdb db tbl table small table dbname memory dbtype duckdb stdb agent col is character vars b col vals between vars a left right interrogate con dbi dbconnect odbc odbc driver odbc driver for sql server server database uid pwd kemi dplyr tbl con dbplyr in schema data kemi kemi agent col is date vars dato col vals between columns vars jord c left right na pass true interrogate error nanodbc nanodbc cpp an expression of non boolean type specified in a context where a condition is expected near incorrect syntax near jord c statement s could not be prepared select n from select count as n from select progid aktid dato sted avneknippe befaestet areal biomasse n biomasse torvaegt bladbille blottet mineraljord bredbladede urter dvaergbuske enebaer fornelag tykkelse fornelag tykkelse fornelag tykkelse fornelag tykkelse 
graaris graesser halvgraesser siv og frytle havtorn hegnet areal holjer hulheder humuslag tykkelse humuslag tykkelse humuslag tykkelse humuslag tykkelse jord c jord c under detektionsgraensen jord n jord n under detektionsgraensen jord basemaetning jord fosfortal jord fosfortal under detektionsgraensen jord ph jordprove torvaegt klokkelyng lichener lysforhold lysforhold lysforhold lysforhold morlagstykkelse morlagstykkelse morlagstykkelse morlagstykkelse mosser naturtype plante n plante n art plante p raad trunter vadegraes vand vand n under detektionsgraensen vand vand n under detektionsgraensen vand vand p under detektionsgraensen vand ledningsevne vand ph vandflade vandstand pejling vedplanter over vedplanter samlet vedplanter under vegetationshojde vegetationshojde vegetationshojde vegetationshojde case when jord c and jord c then true when jord c then false when jord c is null and true then true when jord c is null and true false then false end as pb is good from data kemi r kemi dplyr select dato jord c dplyr collect kemi local kemi agent col is date vars dato col vals between columns vars jord c left right na pass true interrogate created on by the session info r sessioninfo session info ─ session info ─────────────────────────────────────────────────────────────── setting value version r version os ubuntu lts system linux gnu ui language en us en collate en us utf ctype en us utf tz europe copenhagen date ─ packages ─────────────────────────────────────────────────────────────────── package version date lib assertthat backports bit blastula blob cli crayon dbi dbplyr digest dplyr duckdb ellipsis evaluate fansi fastmap fs generics glue highr hms htmltools knitr lifecycle magrittr odbc pillar pkgconfig pointblank purrr rcpp reprex rlang rmarkdown rstudioapi sessioninfo stringi stringr styler tibble tidyselect vctrs withr xfun yaml source cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran 
r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r github rich iannone pointblank cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r cran r home r pc linux gnu library usr local lib r site library usr lib r site library usr lib r library expected result i expected the interrogation to be carried out just like for the other cases | 1 |
742,244 | 25,844,946,118 | IssuesEvent | 2022-12-13 05:20:37 | adanvdo/YT-RED-UI | https://api.github.com/repos/adanvdo/YT-RED-UI | closed | Do not record an error log or show dialog if download is cancelled | bug fixed High Priority | Currently, if a download is cancelled yt-red will display a dialog with the exception message "A task was cancelled"
These exceptions should be ignored if they are invoked by the user | 1.0 | Do not record an error log or show dialog if download is cancelled - Currently, if a download is cancelled yt-red will display a dialog with the exception message "A task was cancelled"
These exceptions should be ignored if they are invoked by the user | priority | do not record an error log or show dialog if download is cancelled currently if a download is cancelled yt red will display a dialog with the exception message a task was cancelled these exceptions should be ignored if they are invoked by the user | 1 |
192,410 | 6,849,670,559 | IssuesEvent | 2017-11-13 23:01:06 | douira/resolution-editor | https://api.github.com/repos/douira/resolution-editor | opened | Display if a resolution has declared important question | high priority | in meta block "Vote Results". TODO item. | 1.0 | Display if a resolution has declared important question - in meta block "Vote Results". TODO item. | priority | display if a resolution has declared important question in meta block vote results todo item | 1 |
316,172 | 9,637,883,553 | IssuesEvent | 2019-05-16 09:48:19 | gzinck/des | https://api.github.com/repos/gzinck/des | closed | Accommodate changes in arena construction | high priority | Four major changes:
1. Controllable/uncontrollable events must be considered; if uncontrollable from controller's perspective initially, we have a chain reaction of bad states.
2. Unobservable events should go to another system state (in V2) with same control policy and state estimates, but a new actual state.
3. Each state should also have an accompanying actual state, q.
4. Admissible control policies must always allow uncontrollable events from a given state. | 1.0 | Accommodate changes in arena construction - Four major changes:
1. Controllable/uncontrollable events must be considered; if uncontrollable from controller's perspective initially, we have a chain reaction of bad states.
2. Unobservable events should go to another system state (in V2) with same control policy and state estimates, but a new actual state.
3. Each state should also have an accompanying actual state, q.
4. Admissible control policies must always allow uncontrollable events from a given state. | priority | accommodate changes in arena construction four major changes controllable uncontrollable events must be considered if uncontrollable from controller s perspective initially we have a chain reaction of bad states unobservable events should go to another system state in with same control policy and state estimates but a new actual state each state should also have an accompanying actual state q admissible control policies must always allow uncontrollable events from a given state | 1 |
85,023 | 3,683,872,311 | IssuesEvent | 2016-02-24 15:33:51 | ufal/lindat-dspace | https://api.github.com/repos/ufal/lindat-dspace | closed | Submitters can't edit metadata | bug high priority | Neither the collection wide nor selective setting allowing submitters to edit metadata of their own items (after submission was approved) is working | 1.0 | Submitters can't edit metadata - Neither the collection wide nor selective setting allowing submitters to edit metadata of their own items (after submission was approved) is working | priority | submitters can t edit metadata neither the collection wide nor selective setting allowing submitters to edit metadata of their own items after submission was approved is working | 1 |
155,260 | 5,951,538,539 | IssuesEvent | 2017-05-26 19:49:31 | atlassian/localstack | https://api.github.com/repos/atlassian/localstack | closed | Sns publishing to lambda returns unexpected protocol "lambda" | bug integration-bug priority-high | Hi all, I'm basically setting up lambda to receive topic pushes from sns. However, it seems like the sns listener doesn't know what the protocol "lambda" is. Any idea why this isn't working or is this feature not yet implemented?
What I did:
1. aws sns create-topic ...
2. aws lambda create-function ...
3. aws sns subscribe --protocol lambda ...
4. aws sns publish ...
and I get this error:
localstack_1 | WARNING:localstack.mock.proxy.sns_listener:Unexpected protocol "lambda" for SNS subscription | 1.0 | Sns publishing to lambda returns unexpected protocol "lambda" - Hi all, I'm basically setting up lambda to receive topic pushes from sns. However, it seems like the sns listener doesn't know what the protocol "lambda" is. Any idea why this isn't working or is this feature not yet implemented?
What I did:
1. aws sns create-topic ...
2. aws lambda create-function ...
3. aws sns subscribe --protocol lambda ...
4. aws sns publish ...
and I get this error:
localstack_1 | WARNING:localstack.mock.proxy.sns_listener:Unexpected protocol "lambda" for SNS subscription | priority | sns publishing to lambda returns unexpected protocol lambda hi all i m basically setting up lambda to receive topic pushes from sns however it seems like the sns listener doesn t know what the protocol lambda is any idea why this isn t working or is this feature not yet implemented what i did aws sns create topic aws lambda create function aws sns subscribe protocol lambda aws sns publish and i get this error localstack warning localstack mock proxy sns listener unexpected protocol lambda for sns subscription | 1 |
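The warning in the row above comes from the mock SNS listener rejecting a subscription protocol it does not recognize. As an illustrative sketch only (the function and the protocol set below are hypothetical, not localstack's actual internals), the dispatch logic behind such a warning looks like:

```python
import logging

logger = logging.getLogger("sns_listener")

# Hypothetical set of protocols this toy listener can deliver to.
SUPPORTED_PROTOCOLS = {"http", "https", "email", "sqs"}

def deliver(protocol: str, endpoint: str, message: str) -> bool:
    """Deliver one published message to one subscription.

    Returns False and logs a warning when the protocol is unknown,
    mirroring the 'Unexpected protocol' message from the issue.
    """
    if protocol not in SUPPORTED_PROTOCOLS:
        logger.warning('Unexpected protocol "%s" for SNS subscription', protocol)
        return False
    # Real delivery (HTTP POST, SQS send, Lambda invoke, ...) would go here.
    return True
```

Supporting `lambda` subscriptions then amounts to adding that protocol to the set and invoking the subscribed function ARN in the delivery branch.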
405,428 | 11,873,120,850 | IssuesEvent | 2020-03-26 16:49:01 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | During the agent installation, if the ACLK fails to be built, show an error message to the user | ACLK internal priority/high priority/medium | #### Summary
Ensure that the requirements in product#282 are met.
- [x] If ACLK error reporting is not set yet by #8051, define it with this issue
- [x] If the ACLK build fails, make it prominent to the user so they know about it. It should not be just a small line easily overlooked in the log. @amoss will speak to @jacekkolasa about this separately.
- [x] Netdata should log on startup that it is built without ACLK
- [x] Report failure to Cloud to same endpoint #8051
- [x] Respect DO_NOT_TRACK environment variable
(Some of this will be covered by PR 8025 but there are fresh requests at the bottom of the discussion).
| 2.0 | During the agent installation, if the ACLK fails to be built, show an error message to the user - #### Summary
Ensure that the requirements in product#282 are met.
- [x] If ACLK error reporting is not set yet by #8051, define it with this issue
- [x] If the ACLK build fails, make it prominent to the user so they know about it. It should not be just a small line easily overlooked in the log. @amoss will speak to @jacekkolasa about this separately.
- [x] Netdata should log on startup that it is built without ACLK
- [x] Report failure to Cloud to same endpoint #8051
- [x] Respect DO_NOT_TRACK environment variable
(Some of this will be covered by PR 8025 but there are fresh requests at the bottom of the discussion).
| priority | during the agent installation if the aclk fails to be built show an error message to the user summary ensure that the requirement in product are met if aclk error reporting not set yet by define it with this issue if aclk build fails make it prominent to the user so he knows about it it should not be just small line easily overlooked in log amoss will speak to jacekkolasa about this separately netdata should log on startup it is build without aclk report failure to cloud to same endpoint respect do not track environment variable some of this will be covered by pr but there are fresh requests at the bottom of the discussion | 1 |
646,491 | 21,049,537,387 | IssuesEvent | 2022-03-31 19:16:34 | AY2122S2-CS2103T-W09-3/tp | https://api.github.com/repos/AY2122S2-CS2103T-W09-3/tp | closed | Add responsiveness of budget to different commands... | type.Epic type.Story type.Task priority.High | `AddCommand` -> Budget decreases
`DeleteCommand` -> Budget increases
`ClearCommand` -> Budget reset to undefined
`EditCommand` -> Budget adjusted accordingly _(if amount adjusted to be more, then budget decrease)_
Different ways of implementation, but ultimately must be able to see
- Monthly budget set
- Remaining budget
- Total expense amount (optional) | 1.0 | Add responsiveness of budget to different commands... - `AddCommand` -> Budget decreases
`DeleteCommand` -> Budget increases
`ClearCommand` -> Budget reset to undefined
`EditCommand` -> Budget adjusted accordingly _(if amount adjusted to be more, then budget decrease)_
Different ways of implementation, but ultimately must be able to see
- Monthly budget set
- Remaining budget
- Total expense amount (optional) | priority | add responsiveness of budget to different commands addcommand budget decreases deletecommand budget increases clearcommand budget reset to undefined editcommand budget adjusted accordingly if amount adjusted to be more then budget decrease different ways of implementation but ultimately must be able to see monthly budget set remaining budget total expense amount optional | 1 |
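The command-to-budget mapping listed in the row above reduces to a small amount of bookkeeping. A toy sketch (class and method names are mine, not the project's):

```python
from typing import Optional

class Budget:
    """Toy model of a monthly budget that reacts to expense commands."""

    def __init__(self, monthly: Optional[float] = None) -> None:
        self.monthly = monthly  # None means the budget is undefined
        self.spent = 0.0

    @property
    def remaining(self) -> Optional[float]:
        return None if self.monthly is None else self.monthly - self.spent

    def add(self, amount: float) -> None:
        """AddCommand: a new expense decreases the remaining budget."""
        self.spent += amount

    def delete(self, amount: float) -> None:
        """DeleteCommand: removing an expense increases the remaining budget."""
        self.spent -= amount

    def edit(self, old: float, new: float) -> None:
        """EditCommand: adjust by the difference between old and new amounts."""
        self.spent += new - old

    def clear(self) -> None:
        """ClearCommand: reset the budget to undefined."""
        self.monthly = None
        self.spent = 0.0
```

Whatever the actual implementation, the displayed values (monthly budget, remaining budget, optional total expenses) all derive from these two fields.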
439,240 | 12,679,734,227 | IssuesEvent | 2020-06-19 12:24:22 | pytorch/ignite | https://api.github.com/repos/pytorch/ignite | opened | Add migration notes : v0.3.0 -> v0.4.0 | enhancement high-priority |
The idea is to add a small descriptive note (a markdown or a wiki page) with examples of how to migrate v0.3.0 code to v0.4.0 while keeping similar behaviour:
- Engine and random seed, DeterministicEngine
- Metrics and device
- ...
| 1.0 | Add migration notes : v0.3.0 -> v0.4.0 -
The idea is to add a small descriptive note (a markdown or a wiki page) with examples of how to migrate v0.3.0 code to v0.4.0 while keeping similar behaviour:
- Engine and random seed, DeterministicEngine
- Metrics and device
- ...
| priority | add migration notes idea is to add a small descriptive note a markdown or a wiki page with examples of how to migrate code to while keeping a similar behaviour engine and random seed deterministicengine metrics and device | 1 |
661,286 | 22,046,372,099 | IssuesEvent | 2022-05-30 02:30:29 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | Devops version management error | area/devops kind/bug kind/need-to-verify priority/high | **Describe the bug**
There is a multi-cluster environment, `host ks: v3.3.0-alpha2`, `member ks:v3.2.1` and both enabled devops.
**Expected behavior**
The features of v3.3.0 are not displayed in the devops project created in member cluster.
**Actual behavior**
The features of v3.3.0 are displayed in the devops project created in member cluster. but did not work.

**Versions used(KubeSphere/Kubernetes)**
host ks: v3.3.0-alpha.2
member ks: 3.2.1
/priority high
/assign @kubesphere/sig-console @kubesphere/sig-devops | 1.0 | Devops version management error - **Describe the bug**
There is a multi-cluster environment, `host ks: v3.3.0-alpha2`, `member ks:v3.2.1` and both enabled devops.
**Expected behavior**
The features of v3.3.0 are not displayed in the devops project created in member cluster.
**Actual behavior**
The features of v3.3.0 are displayed in the devops project created in member cluster. but did not work.

**Versions used(KubeSphere/Kubernetes)**
host ks: v3.3.0-alpha.2
member ks: 3.2.1
/priority high
/assign @kubesphere/sig-console @kubesphere/sig-devops | priority | devops version management error describe the bug there is a multi cluster environment host ks member ks and both enabled devops expected behavior the features of are not displayed in the devops project created in member cluster actual behavior the features of are displayed in the devops project created in member cluster but did not work versions used kubesphere kubernetes host ks alpha member ks priority high assign kubesphere sig console kubesphere sig devops | 1 |
205,824 | 7,106,469,229 | IssuesEvent | 2018-01-16 16:43:16 | Signbank/NGT-signbank | https://api.github.com/repos/Signbank/NGT-signbank | closed | Allow for importing of new glosses by the user | enhancement high priority migration | Create an import function somewhere where the user can upload a CSV file that contains ID Gloss, Annotation ID Gloss, and Annotation ID Gloss English in three columns, creating new signs on the basis of those three values. Give feedback about which glosses were correctly created and which glosses already existed (on the basis of Annotation ID Gloss and/or Annotation ID Gloss English).
| 1.0 | Allow for importing of new glosses by the user - Create an import function somewhere where the user can upload a CSV file that contains ID Gloss, Annotation ID Gloss, and Annotation ID Gloss English in three columns, creating new signs on the basis of those three values. Give feedback about which glosses were correctly created and which glosses already existed (on the basis of Annotation ID Gloss and/or Annotation ID Gloss English).
| priority | allow for importing of new glosses by the user create an import function somewhere where the user can upload a csv file that contains id gloss annotation id gloss and annotation id gloss english in three columns creating new signs on the basis of those three values give feedback about which glosses were correctly created and which glosses already existed on the basis of annotation id gloss and or annotation id gloss english | 1 |
397,415 | 11,728,100,591 | IssuesEvent | 2020-03-10 16:59:21 | unfoldingWord/translationCore | https://api.github.com/repos/unfoldingWord/translationCore | closed | Verse edits are not persisted when performed in the scripture pane of tN or tW and not on the verse in the selected check | Kind/Bug Priority/High | 2.2.0 (69c6077)
1. Load a project with no edits in tN or tW
2. Open the expanded scripture pane
3. Edit a verse that is not the one selected in the check
4. Select a reason code and click Save
5. Note that the edit is not persisted in the expanded scripture pane
6. Navigate to a check that includes the edited verse
7. Note that the edit icon is displayed on the check, but the edit is missing | 1.0 | Verse edits are not persisted when performed in the scripture pane of tN or tW and not on the verse in the selected check - 2.2.0 (69c6077)
1. Load a project with no edits in tN or tW
2. Open the expanded scripture pane
3. Edit a verse that is not the one selected in the check
4. Select a reason code and click Save
5. Note that the edit is not persisted in the expanded scripture pane
6. Navigate to a check that includes the edited verse
7. Note that the edit icon is displayed on the check, but the edit is missing | priority | verse edits are not persisted when performed in the scripture pane of tn or tw and not on the verse in the selected check load a project with no edits in tn or tw open the expanded scripture pane edit a verse that is not the one selected in the check select a reason code and click save note that the edit is not persisted in the expanded scripture pane navigate to a check that includes the edited verse note that the edit icon is displayed on the check but the edit is missing | 1 |
103,926 | 4,187,443,994 | IssuesEvent | 2016-06-23 17:30:45 | chocolatey/choco | https://api.github.com/repos/chocolatey/choco | closed | Successful installer exit codes not recognized by choco should return 0 | 3 - Done Bug Priority_HIGH | https://chocolatey.org/packages/qbittorrent#comment-2743141882
Discussion at https://gitter.im/chocolatey/choco?at=57699b590ede04dc49036f43
| 1.0 | Successful installer exit codes not recognized by choco should return 0 - https://chocolatey.org/packages/qbittorrent#comment-2743141882
Discussion at https://gitter.im/chocolatey/choco?at=57699b590ede04dc49036f43
| priority | successful installer exit codes not recognized by choco should return discussion at | 1 |
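One plausible shape of the fix requested in the row above is a whitelist of installer exit codes that count as success; 1641 and 3010 are the standard MSI reboot-initiated and reboot-required codes. This is an illustrative sketch, not Chocolatey's actual implementation:

```python
# Installer exit codes treated as success (hypothetical list for illustration;
# 1641 = ERROR_SUCCESS_REBOOT_INITIATED, 3010 = ERROR_SUCCESS_REBOOT_REQUIRED).
SUCCESS_EXIT_CODES = {0, 1641, 3010}

def overall_exit_code(installer_exit_code: int) -> int:
    """Return 0 when the installer signalled success, else pass the code through."""
    return 0 if installer_exit_code in SUCCESS_EXIT_CODES else installer_exit_code
```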
395,355 | 11,684,749,967 | IssuesEvent | 2020-03-05 07:35:27 | AY1920S2-CS2103T-W12-3/main | https://api.github.com/repos/AY1920S2-CS2103T-W12-3/main | closed | As a person with a lot of friends I can keep track of who owes me what on which day | priority.High type.Story | ... so that I can ask them to pay me back | 1.0 | As a person with a lot of friends I can keep track of who owes me what on which day - ... so that I can ask them to pay me back | priority | as a person with a lot of friends i can keep track of who owes me what on which day so that i can ask them to pay me back | 1 |
295,445 | 9,086,555,459 | IssuesEvent | 2019-02-18 11:15:45 | canonical-websites/snapcraft.io | https://api.github.com/repos/canonical-websites/snapcraft.io | closed | Too many OSes listed on metrics tooltip | Priority: High | As per https://forum.snapcraft.io/t/my-package-acestreamplayer-is-missing-from-store/8961, specifically https://yadi.sk/i/w_UQOKhyS6OdUg
I imagine we don't need to show OSes with < 100 users in this example - looks like bad Maths.
Related to https://github.com/canonical-websites/snapcraft.io/issues/1429 | 1.0 | Too many OSes listed on metrics tooltip - As per https://forum.snapcraft.io/t/my-package-acestreamplayer-is-missing-from-store/8961, specifically https://yadi.sk/i/w_UQOKhyS6OdUg
I imagine we don't need to show OSes with < 100 users in this example - looks like bad Maths.
Related to https://github.com/canonical-websites/snapcraft.io/issues/1429 | priority | too many oses listed on metrics tooltip as per specifically i imagine we don t need to show oses with users in this example looks like bad maths related to | 1 |
379,888 | 11,243,323,070 | IssuesEvent | 2020-01-10 02:39:06 | FusionCorps/2020-Green | https://api.github.com/repos/FusionCorps/2020-Green | closed | Add voltage ramp up for chassis motors | feature high-priority subsystem | **Is your feature request related to a problem? Please describe.**
Last year, the robot would brown out often during matches. This is due to the pneumatic wheels that make accelerated movement draw too much power.
**Describe the solution you'd like**
The voltage ramp-up of the Talons needs to be controlled based on other motion characteristics of the robot.
**Describe alternatives you've considered**
A fixed voltage ramp up might work, but limit movement.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | Add voltage ramp up for chassis motors - **Is your feature request related to a problem? Please describe.**
Last year, the robot would brown out often during matches. This is due to the pneumatic wheels that make accelerated movement draw too much power.
**Describe the solution you'd like**
The voltage ramp-up of the Talons needs to be controlled based on other motion characteristics of the robot.
**Describe alternatives you've considered**
A fixed voltage ramp up might work, but limit movement.
**Additional context**
Add any other context or screenshots about the feature request here.
| priority | add voltage ramp up for chassis motors is your feature request related to a problem please describe last year the robot would brown out often during matches this is due to the pneumatic wheels that make accelerated movement draw too much power describe the solution you d like the voltage ramp up of the talons needs to be controlled based on other motion characteristics of the robot describe alternatives you ve considered a fixed voltage ramp up might work but limit movement additional context add any other context or screenshots about the feature request here | 1 |
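A slew-rate limiter is the usual form of the controlled ramp-up described in the row above: cap how much the commanded output may change per control cycle. A hedged sketch (not the team's actual Talon configuration):

```python
def slew_limit(target: float, current: float, max_step: float) -> float:
    """Move `current` toward `target` by at most `max_step` per control cycle.

    Limiting the per-cycle change in commanded output caps current draw
    during acceleration, which is one way to avoid brownouts.
    """
    delta = target - current
    if delta > max_step:
        return current + max_step
    if delta < -max_step:
        return current - max_step
    return target
```

Called once per loop iteration; `max_step` could itself be scaled by measured battery voltage or speed, matching the issue's idea of controlling the ramp based on other motion characteristics.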
450,595 | 13,016,965,483 | IssuesEvent | 2020-07-26 09:46:20 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | Fail to edit custom workspace role with a javascript error | area/console kind/bug priority/high | ## English only!
**Note: GitHub issues support English only; please submit Chinese-language issues on the [forum](https://kubesphere.com.cn/forum/).**
**General remarks**
> Please delete this section including header before submitting
>
> This form is to report bugs. For general usage questions refer to our Slack channel
> [KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTdkNTc3OTdmNzdiODViZjViNTU5ZDY3M2I2MzY4MTI4OGZlOTJmMDg5ZTFiMDAwYzNlZDY5NjA0NzZlNDU5NmY)
**Describe the Bug**
Fail to edit custom workspace role with a javascript error
**Versions Used**
KubeSphere: KubeSphere 3.0 alpha 1
**How To Reproduce**
Steps to reproduce the behavior:
1. Create a custom role with no access granted
2. Re-edit this role, add 'Project View' access, then save it
3. No response, then see a javascript error
<img width="1654" alt="Xnip2020-07-23_15-19-06" src="https://user-images.githubusercontent.com/28883416/88261059-2bfa4880-ccf8-11ea-8963-89d0f0a7f720.png">
| 1.0 | Fail to edit custom workspace role with a javascript error - ## English only!
**Note: GitHub issues support English only; please submit Chinese-language issues on the [forum](https://kubesphere.com.cn/forum/).**
**General remarks**
> Please delete this section including header before submitting
>
> This form is to report bugs. For general usage questions refer to our Slack channel
> [KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTdkNTc3OTdmNzdiODViZjViNTU5ZDY3M2I2MzY4MTI4OGZlOTJmMDg5ZTFiMDAwYzNlZDY5NjA0NzZlNDU5NmY)
**Describe the Bug**
Fail to edit custom workspace role with a javascript error
**Versions Used**
KubeSphere: KubeSphere 3.0 alpha 1
**How To Reproduce**
Steps to reproduce the behavior:
1. Create a custom role with no access granted
2. Re-edit this role, add 'Project View' access, then save it
3. No response, then see a javascript error
<img width="1654" alt="Xnip2020-07-23_15-19-06" src="https://user-images.githubusercontent.com/28883416/88261059-2bfa4880-ccf8-11ea-8963-89d0f0a7f720.png">
| priority | fail to edit custom workspace role with a javascript error english only note github issues support english only please submit chinese issues on the forum general remarks please delete this section including header before submitting this form is to report bugs for general usage questions refer to our slack channel describe the bug fail to edit custom workspace role with a javascript error versions used kubesphere kubesphere alpha how to reproduce steps to reproduce the behavior create a custom role with no access granted re edit this role add project view access then save it no response then see a javascript error img width alt src | 1
46,504 | 2,958,349,477 | IssuesEvent | 2015-07-08 20:56:16 | Ombridride/minetest-minetestforfun-server | https://api.github.com/repos/Ombridride/minetest-minetestforfun-server | closed | Better Mapgen | Modding@Mapgen Priority@High | We planned to change the mapgen of the server; we need to test it in a local dev server, but it will be interesting for deleting highlandpools (too bugged), and with this mapgen we are going to have a very interesting mountains/rivers/etc... generation.
#### Compatibility issue
It works with our actual core mapgen (v6) and it's written in Lua. (It's a mod, so it overrides the actual mapgen.)
#### Link(s)
https://forum.minetest.net/viewtopic.php?f=11&t=8609&hilit=watershed
#### Screenshot(s)


#### Internal code changes
- [x] Increase the floating islands rate
- [x] Reduce the biome size
- [ ] Tweak the caves generation
#### Nodes which need to be converted
- [ ] Node1
- [ ] Node2
- [ ] Node3
- [ ] Node4
- [ ] Node5
- [ ] Node6
- [ ] Node7
- [ ] Node8
- [ ] Node9
- [ ] etc...
#### Temperature unification
- [ ] Biome1
- [ ] Biome2
- [ ] Biome3
- [ ] Biome4
- [ ] Biome5
- [ ] Biome6
#### Humidity unification
- [ ] Biome1
- [ ] Biome2
- [ ] Biome3
- [ ] Biome4
- [ ] Biome5
- [ ] Biome6
#### Other information(s)
If possible, don't tweak other mods | 1.0 | Better Mapgen - We planned to change the mapgen of the server; we need to test it in a local dev server, but it will be interesting for deleting highlandpools (too bugged), and with this mapgen we are going to have a very interesting mountains/rivers/etc... generation.
#### Compatibility issue
It works with our actual core mapgen (v6) and it's written in Lua. (It's a mod, so it overrides the actual mapgen.)
#### Link(s)
https://forum.minetest.net/viewtopic.php?f=11&t=8609&hilit=watershed
#### Screenshot(s)


#### Internal code changes
- [x] Increase the floating islands rate
- [x] Reduce the biome size
- [ ] Tweak the caves generation
#### Nodes which need to be converted
- [ ] Node1
- [ ] Node2
- [ ] Node3
- [ ] Node4
- [ ] Node5
- [ ] Node6
- [ ] Node7
- [ ] Node8
- [ ] Node9
- [ ] etc...
#### Temperature unification
- [ ] Biome1
- [ ] Biome2
- [ ] Biome3
- [ ] Biome4
- [ ] Biome5
- [ ] Biome6
#### Humidity unification
- [ ] Biome1
- [ ] Biome2
- [ ] Biome3
- [ ] Biome4
- [ ] Biome5
- [ ] Biome6
#### Other information(s)
If possible, don't tweak another mods | priority | better mapgen we planed to change the mapgen of the server we need to test it in a local dev server but it will be interresting for deleting highlandpools too bugged and with this mapgen we are going to have a very interresting mountains rivers etc generation comptability issue it works with our actual core mapgen and it s written in lua it s a mod so it overwrited the actual mapgen link s screenshot s internal code changes increase the floating islands rate reduce the biome size tweak the caves generation nodes wich needs to be converted etc temperature unification humidity unification other information s if possible don t tweak another mods | 1 |
445,690 | 12,835,044,123 | IssuesEvent | 2020-07-07 12:12:13 | ballerina-platform/module-ballerinax-mongodb | https://api.github.com/repos/ballerina-platform/module-ballerinax-mongodb | closed | Support client certificates | Priority/High Severity/Major Type/Bug | **Description:**
<!-- Give a brief description of the issue -->
There appears to be no support for using client certificates for connecting to the database. In considering using ballerina this is a deal-breaker for us. If it is indeed supported, please document how it can be done.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
enhancement
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | 1.0 | Support client certificates - **Description:**
<!-- Give a brief description of the issue -->
There appears to be no support for using client certificates for connecting to the database. In considering using ballerina this is a deal-breaker for us. If it is indeed supported, please document how it can be done.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
enhancement
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | priority | support client certificates description there appears to be no support for using client certificates for connecting to the database in considering using ballerina this is a deal breaker for us if it is indeed supported please document how it can be done suggested labels enhancement suggested assignees affected product version os db other environment details and versions steps to reproduce related issues | 1 |
802,825 | 29,047,085,236 | IssuesEvent | 2023-05-13 17:56:53 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | [2.16beta01 Bug]: IndexOutOfBoundsException on DeckAdapter | Priority-High Bug |
```
java.lang.IndexOutOfBoundsException: Inconsistency detected. Invalid item position 8(offset:8).state:29 androidx.recyclerview.widget.RecyclerView{b2ffdd6 VFED.V... ......ID 0,0-1440,2673 #7f0901d1 app:id/files}, adapter:com.ichi2.anki.widgets.DeckAdapter@1000a4d, layout:androidx.recyclerview.widget.LinearLayoutManager@ae0eb86, context:com.ichi2.anki.DeckPicker@c21356d
at androidx.recyclerview.widget.RecyclerView$Recycler.tryGetViewHolderForPositionByDeadline(RecyclerView.java:382)
at androidx.recyclerview.widget.RecyclerView$Recycler.getViewForPosition(RecyclerView.java:2)
at androidx.recyclerview.widget.RecyclerView$Recycler.getViewForPosition(RecyclerView.java:1)
at androidx.recyclerview.widget.LinearLayoutManager$LayoutState.next(LinearLayoutManager.java:12)
at androidx.recyclerview.widget.LinearLayoutManager.layoutChunk(LinearLayoutManager.java:1)
at androidx.recyclerview.widget.LinearLayoutManager.fill(LinearLayoutManager.java:39)
at androidx.recyclerview.widget.LinearLayoutManager.scrollBy(LinearLayoutManager.java:35)
at androidx.recyclerview.widget.LinearLayoutManager.scrollVerticallyBy(LinearLayoutManager.java:7)
at androidx.recyclerview.widget.RecyclerView.scrollStep(RecyclerView.java:40)
at androidx.recyclerview.widget.RecyclerView$ViewFlinger.run(RecyclerView.java:116)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:1301)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:1309)
at android.view.Choreographer.doCallbacks(Choreographer.java:923)
at android.view.Choreographer.doFrame(Choreographer.java:847)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:1283)
at android.os.Handler.handleCallback(Handler.java:942)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:226)
at android.os.Looper.loop(Looper.java:313)
at android.app.ActivityThread.main(ActivityThread.java:8757)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:571)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1067)
```
https://ankidroid.org/acra/app/1/bug/47707/report/3482ece4-f673-4242-9af7-5fdeb6368cf3
Android 13
Samsung phone
@david-allison said on Discord:
> * DeckAdapter issue (I suspect on Samsung phones): https://ankidroid.org/acra/app/1/bug/47707/report/3482ece4-f673-4242-9af7-5fdeb6368cf3
* Likely a very simple fix, judging from StackOverflow | 1.0 | [2.16beta01 Bug]: IndexOutOfBoundsException on DeckAdapter -
```
java.lang.IndexOutOfBoundsException: Inconsistency detected. Invalid item position 8(offset:8).state:29 androidx.recyclerview.widget.RecyclerView{b2ffdd6 VFED.V... ......ID 0,0-1440,2673 #7f0901d1 app:id/files}, adapter:com.ichi2.anki.widgets.DeckAdapter@1000a4d, layout:androidx.recyclerview.widget.LinearLayoutManager@ae0eb86, context:com.ichi2.anki.DeckPicker@c21356d
at androidx.recyclerview.widget.RecyclerView$Recycler.tryGetViewHolderForPositionByDeadline(RecyclerView.java:382)
at androidx.recyclerview.widget.RecyclerView$Recycler.getViewForPosition(RecyclerView.java:2)
at androidx.recyclerview.widget.RecyclerView$Recycler.getViewForPosition(RecyclerView.java:1)
at androidx.recyclerview.widget.LinearLayoutManager$LayoutState.next(LinearLayoutManager.java:12)
at androidx.recyclerview.widget.LinearLayoutManager.layoutChunk(LinearLayoutManager.java:1)
at androidx.recyclerview.widget.LinearLayoutManager.fill(LinearLayoutManager.java:39)
at androidx.recyclerview.widget.LinearLayoutManager.scrollBy(LinearLayoutManager.java:35)
at androidx.recyclerview.widget.LinearLayoutManager.scrollVerticallyBy(LinearLayoutManager.java:7)
at androidx.recyclerview.widget.RecyclerView.scrollStep(RecyclerView.java:40)
at androidx.recyclerview.widget.RecyclerView$ViewFlinger.run(RecyclerView.java:116)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:1301)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:1309)
at android.view.Choreographer.doCallbacks(Choreographer.java:923)
at android.view.Choreographer.doFrame(Choreographer.java:847)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:1283)
at android.os.Handler.handleCallback(Handler.java:942)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:226)
at android.os.Looper.loop(Looper.java:313)
at android.app.ActivityThread.main(ActivityThread.java:8757)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:571)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1067)
```
https://ankidroid.org/acra/app/1/bug/47707/report/3482ece4-f673-4242-9af7-5fdeb6368cf3
Android 13
Samsung phone
@david-allison said on Discord:
> * DeckAdapter issue (I suspect on Samsung phones): https://ankidroid.org/acra/app/1/bug/47707/report/3482ece4-f673-4242-9af7-5fdeb6368cf3
* Likely a very simple fix, judging from StackOverflow | priority | indexoutofboundsexception on deckadapter java lang indexoutofboundsexception inconsistency detected invalid item position offset state androidx recyclerview widget recyclerview vfed v id app id files adapter com anki widgets deckadapter layout androidx recyclerview widget linearlayoutmanager context com anki deckpicker at androidx recyclerview widget recyclerview recycler trygetviewholderforpositionbydeadline recyclerview java at androidx recyclerview widget recyclerview recycler getviewforposition recyclerview java at androidx recyclerview widget recyclerview recycler getviewforposition recyclerview java at androidx recyclerview widget linearlayoutmanager layoutstate next linearlayoutmanager java at androidx recyclerview widget linearlayoutmanager layoutchunk linearlayoutmanager java at androidx recyclerview widget linearlayoutmanager fill linearlayoutmanager java at androidx recyclerview widget linearlayoutmanager scrollby linearlayoutmanager java at androidx recyclerview widget linearlayoutmanager scrollverticallyby linearlayoutmanager java at androidx recyclerview widget recyclerview scrollstep recyclerview java at androidx recyclerview widget recyclerview viewflinger run recyclerview java at android view choreographer callbackrecord run choreographer java at android view choreographer callbackrecord run choreographer java at android view choreographer docallbacks choreographer java at android view choreographer doframe choreographer java at android view choreographer framedisplayeventreceiver run choreographer java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper looponce looper java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java android samsung phone david allison said on discord deckadapter issue i suspect on samsung phones likely a very simple fix judging from stackoverflow | 1
66,064 | 3,249,833,608 | IssuesEvent | 2015-10-18 13:54:09 | mattmezza/socialize | https://api.github.com/repos/mattmezza/socialize | opened | GET /feeds/:per_page/:page | API high-priority | Retrieves the list of maximum `:per_page` elements of the most recent posts published by followed and friend users. The `:page` url param stands for the page you want to get.
Example:
*GET* `/feeds/20/2` gets the second page of the 20 elements pagination.
*GET* `/feeds/50/1` gets the first page of the 50 elements pagination. | 1.0 | GET /feeds/:per_page/:page - Retrieves a list of at most `:per_page` elements of the most recent posts published by followed and friend users. The `:page` URL param stands for the page you want to get.
Example:
*GET* `/feeds/20/2` gets the second page of the 20 elements pagination.
*GET* `/feeds/50/1` gets the first page of the 50 elements pagination. | priority | get feeds per page page retrieves the list of maximum per page elements of the most recent posts published by followed and friend users the page url param stands for the page you want to get example get feeds gets the second page of the elements pagination get feeds gets the first page of the elements pagination | 1 |
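The pagination rule described in the record above can be sketched in a few lines. This is a hypothetical illustration; the function name and the list-based feed below are not part of the socialize codebase:

```python
# Hypothetical sketch of the /feeds/:per_page/:page pagination described
# above. `posts` stands in for the feed (most recent first); all names are
# illustrative, not taken from the socialize project.

def feed_page(posts, per_page, page):
    """Return page `page` (1-indexed) of `posts`, `per_page` items per page."""
    if per_page < 1 or page < 1:
        raise ValueError("per_page and page must be >= 1")
    start = (page - 1) * per_page
    return posts[start:start + per_page]
```

Under this sketch, `/feeds/20/2` maps to `feed_page(posts, 20, 2)`, i.e. the second block of 20 items.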
443,775 | 12,799,451,430 | IssuesEvent | 2020-07-02 15:24:59 | CESNET/perun-web-apps | https://api.github.com/repos/CESNET/perun-web-apps | opened | Bugfixes | Very High Priority bug development task | 1. In OpenApi there is a wrong definition of `getBansForFacility`. The parameter for this method is "facilityId" not "facility"
2. in facility management - hosts there is a bug where an error is written into the console. Fix that bug.
3. in perun admin: attributes - attribute detail dialog. In that dialog change the cancel button into mat-flat-button.
4. set some margin (ml-1) to every column of every table in the whole project. Try it the way where you modify only the styles.css where you set it for every column of the application. | 1.0 | Bugfixes - 1. In OpenApi there is a wrong definition of `getBansForFacility`. The parameter for this method is "facilityId" not "facility"
2. in facility management - hosts there is a bug where an error is written into the console. Fix that bug.
3. in perun admin: attributes - attribute detail dialog. In that dialog change the cancel button into mat-flat-button.
4. set some margin (ml-1) to every column of every table in the whole project. Try it the way where you modify only the styles.css where you set it for every column of the application. | priority | bugfixes in openapi there is wrong definition of getbansforfacility the parameter for this method is facilityid not facility in facility management hosts there is bug where error is written into the console fix that bug in perun admin attributes attribute detail dialog in that dialog change the cancel button into mat flat button set some margin ml to every column of every table in whole project try it the way where you modify only the styles css where you set for every column of application | 1
526,599 | 15,296,646,805 | IssuesEvent | 2021-02-24 07:12:16 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Source Appstore: Invalid vendor ID error | area/integration priority/high type/bug | ## Expected Behavior
When I create credentials for the Appstore source connector according to the [documentation](docs.airbyte.io/integrations/sources/appstore), I expect that it will work as expected and that I can sync all streams.
## Current Behavior
* Check connection fails with an "invalid vendor ID" error message
* Pulling Sales reports succeeds
* Pulling any other report fails
## Steps to Reproduce
1. Follow [the documentation](https://docs.airbyte.io/integrations/sources/appstore) to create an app key with the Finance role
1. create a config.json matching the appstore spec and store it in some directory e.g `/tmp/config.json`
2. Navigate to the appstore connector directory and activate the virtual environment: `cd airbyte-integrations/connectors/source-appstore-singer && python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt && pip install .[main]`
1. `python main_dev.py check --config /ab_tmp/config.json`
2. The check will fail with an error saying the vendor is invalid
3. change the code in `source_appstore_singer/source.py`'s `check_config` method to make the `reportType: "SALES"` and `version: 1_0` and run the check again, it will succeed
## Severity of the bug for you
High - blocking a user from using airbyte
## Airbyte Version
This is happening on all published versions of the connector
| 1.0 | Source Appstore: Invalid vendor ID error - ## Expected Behavior
When I create credentials for the Appstore source connector according to the [documentation](docs.airbyte.io/integrations/sources/appstore), I expect that it will work as expected and that I can sync all streams.
## Current Behavior
* Check connection fails with an "invalid vendor ID" error message
* Pulling Sales reports succeeds
* Pulling any other report fails
## Steps to Reproduce
1. Follow [the documentation](https://docs.airbyte.io/integrations/sources/appstore) to create an app key with the Finance role
1. create a config.json matching the appstore spec and store it in some directory e.g `/tmp/config.json`
2. Navigate to the appstore connector directory and activate the virtual environment: `cd airbyte-integrations/connectors/source-appstore-singer && python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt && pip install .[main]`
1. `python main_dev.py check --config /ab_tmp/config.json`
2. The check will fail with an error saying the vendor is invalid
3. change the code in `source_appstore_singer/source.py`'s `check_config` method to make the `reportType: "SALES"` and `version: 1_0` and run the check again, it will succeed
## Severity of the bug for you
High - blocking a user from using airbyte
## Airbyte Version
This is happening on all published versions of the connector
| priority | source appstore invalid vendor id error expected behavior when i create credentials for the appstore source connector according to the docs airbyte io integrations sources appstore i expect that it will work as expected and that i can sync all streams current behavior check connection fails with a invalid vendor id error message pulling sales reports succeeds pulling any other report fails steps to reproduce follow to create an app key with the finance role create a config json matching the appstore spec and store it in some directory e g tmp config json navigate to the appstore connector directory and activate the virtual environment cd airbyte integrations connectors source appstore singer python m venv venv source venv bin activate pip install r requirements txt pip install python main dev py check config ab tmp config json the check will fail with an error saying the vendor is invalid change the code in source appstore singer source py s check config method to make the reporttype sales and version and run the check again it will succeed severity of the bug for you high blocking a user from using airbyte airbyte version this is happening on all published versions of the connector | 1 |
597,067 | 18,154,007,874 | IssuesEvent | 2021-09-26 19:08:14 | transport-nantes/tn_web | https://api.github.com/repos/transport-nantes/tn_web | opened | TBv2 unit test improvements | 1-priority high | The TBv2 functions should be better tested.
Some comments on the existing tests in `transport_nantes/topicblog/tests.py`. The main issue is that you document your code as though your readers have all of your mental state. They don't. You won't, either, when you come back to this in a few months to fix something.
You can get away with scant documentation (sometimes) for obvious code with short tests and really clear function names. Remove any of those properties and you need to document way better.
The test names don't give a good feel for what you expect. For example, `test_item_with_slug_edit_status_code` tells the reader that you are testing something about items and slugs and status codes, but what, exactly? Given that the function is almost 100 lines, this is not even clear from glancing at the code. Partly this is just parsing of snake-case names. You could get around it with a better function comment, but your one-liner is woefully inadequate to the task. Explain the context, what you expect, and what you are testing for. The reader hasn't just spent days implementing the code and reading design docs. You need to provide that here.
Another example (a bit at random) is at line 228, "View with wrong slug and correct id". Great, you're viewing with a wrong slug and a correct id. Why is this important? When do you expect this to happen? What should happen?
Given that this file is already 421 lines long, it might be reasonable to split it into more than one file. At the very least, a file for the publicly accessible view function and a file for the restricted admin functions.
You might want to review your naming somewhat as well. For example, you often say what something isn't, but that doesn't say what it is. An example is the function/path named `edit_item_no_slug`. Calling it `edit_item_by_pkid` is clear. What happens when a third way of editing the item pops around? Negatives are rarely good for names.
| 1.0 | TBv2 unit test improvements - The TBv2 functions should be better tested.
Some comments on the existing tests in `transport_nantes/topicblog/tests.py`. The main issue is that you document your code as though your readers have all of your mental state. They don't. You won't, either, when you come back to this in a few months to fix something.
You can get away with scant documentation (sometimes) for obvious code with short tests and really clear function names. Remove any of those properties and you need to document way better.
The test names don't give a good feel for what you expect. For example, `test_item_with_slug_edit_status_code` tells the reader that you are testing something about items and slugs and status codes, but what, exactly? Given that the function is almost 100 lines, this is not even clear from glancing at the code. Partly this is just parsing of snake-case names. You could get around it with a better function comment, but your one-liner is woefully inadequate to the task. Explain the context, what you expect, and what you are testing for. The reader hasn't just spent days implementing the code and reading design docs. You need to provide that here.
Another example (a bit at random) is at line 228, "View with wrong slug and correct id". Great, you're viewing with a wrong slug and a correct id. Why is this important? When do you expect this to happen? What should happen?
Given that this file is already 421 lines long, it might be reasonable to split it into more than one file. At the very least, a file for the publicly accessible view function and a file for the restricted admin functions.
You might want to review your naming somewhat as well. For example, you often say what something isn't, but that doesn't say what it is. An example is the function/path named `edit_item_no_slug`. Calling it `edit_item_by_pkid` is clear. What happens when a third way of editing the item pops around? Negatives are rarely good for names.
| priority | unit test improvements the functions should be better tested some comments on the existing tests in transport nantes topicblog tests py the main issue is that you document your code as though your readers have all of your mental state they don t you won t either when you come back to this in a few months to fix something you can get away with scant documentation sometimes for obvious code with short tests and really clear function names remove any of those properties and you need to document way better the test names don t give a good feel for what you expect for example test item with slug edit status code tells the reader that you are testing something about items and slugs and status codes but what exactly given that the function is almost lines this is not even clear from glancing at the code partly this is just parsing of snake case names you could get around it with a better function comment but your one liner is woefully inadequate to the task explain the context what you expect and what you are testing for the reader hasn t just spent days implementing the code and reading design docs you need to provide that here another example a bit at random is at line view with wrong slug and correct id great your viewing with wrong slug and correct id why is this important when do you expect this to happen what should happen given that this file is already lines long it might be reasonable to split it into more than one file at the very least a file for the publicly accessible view function and a file for the restricted admin functions you might want to review your naming somewhat as well for example you often say what something isn t but that doesn t say what it is an example is the function path named edit item no slug calling it edit item by pkid is clear what happens when a third way of editing the item pops around negatives are rarely good for names | 1 |
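As an illustration of the review comments in the record above, a test written the way the reviewer asks for might look like the sketch below. All names and code here are hypothetical stand-ins, not taken from the tn_web codebase; the real project would exercise a Django view resolved via a URL named `edit_item_by_pkid` (the positive name the review suggests instead of `edit_item_no_slug`):

```python
import unittest

# Hypothetical stand-in for the view under test: resolve an item by its
# primary key alone, returning None when nothing matches.
def edit_item_by_pkid(pk, items):
    """Return the item with primary key `pk`, or None if it does not exist."""
    return items.get(pk)

class EditItemByPkidTest(unittest.TestCase):
    """Editing an item addressed by primary key rather than by slug.

    Context: editors sometimes reach an item from an admin listing that only
    carries the pk. We expect the view to resolve the item from the pk alone;
    a pk that matches nothing should yield None (a 404 in the real app).
    """

    def test_existing_pk_resolves_item(self):
        items = {1: "first", 2: "second"}
        self.assertEqual(edit_item_by_pkid(2, items), "second")

    def test_missing_pk_yields_none(self):
        items = {1: "first"}
        self.assertIsNone(edit_item_by_pkid(99, items))
```

The docstring carries the context the reviewer asks for (when this happens and what should happen), and the test names state the expectation rather than the mechanism.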
239,132 | 7,787,011,462 | IssuesEvent | 2018-06-06 20:52:21 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Failing test: Error: Uncaught Error: Parent origin mismatch: xyz , abc | P1: High Priority | ```
DESCRIBE => amp-ad-3p-impl
IT => "before each" hook
✗ Error: Uncaught Error: Parent origin mismatch: xyz , abc
The test "3p integration.js should throw in validateParentOrigin with incorrect ancestorOrigins" resulted in a call to console.error. (See above line.)
⤷ If the error is not expected, fix the code that generated the error.
⤷ If the error is expected (and synchronous), use the following pattern to wrap the test code that generated the error:
'allowConsoleError(() => { <code that generated the error> });'
⤷ If the error is expected (and asynchronous), use the following pattern at the top of the test:
'expectAsyncConsoleError(<error text>);' (/tmp/src/error.js:211:6 <- /tmp/11af27809c816846d2190a8c8ab87115.browserify:61774)
●
DESCRIBE =>
IT => "before each" hook for "should create iframe and pass data via URL fragment"
✗ TypeError: Cannot read property 'document' of undefined
at createAmpAd (/tmp/extensions/amp-ad/0.1/test/test-amp-ad-3p-impl.js:32:55 <- /tmp/11af27809c816846d2190a8c8ab87115.browserify:22606:64)
at Context.<anonymous> (/tmp/extensions/amp-ad/0.1/test/test-amp-ad-3p-impl.js:71:11 <- /tmp/11af27809c816846d2190a8c8ab87115.browserify:22661:12)
``` | 1.0 | Failing test: Error: Uncaught Error: Parent origin mismatch: xyz , abc - ```
DESCRIBE => amp-ad-3p-impl
IT => "before each" hook
✗ Error: Uncaught Error: Parent origin mismatch: xyz , abc
The test "3p integration.js should throw in validateParentOrigin with incorrect ancestorOrigins" resulted in a call to console.error. (See above line.)
⤷ If the error is not expected, fix the code that generated the error.
⤷ If the error is expected (and synchronous), use the following pattern to wrap the test code that generated the error:
'allowConsoleError(() => { <code that generated the error> });'
⤷ If the error is expected (and asynchronous), use the following pattern at the top of the test:
'expectAsyncConsoleError(<error text>);' (/tmp/src/error.js:211:6 <- /tmp/11af27809c816846d2190a8c8ab87115.browserify:61774)
●
DESCRIBE =>
IT => "before each" hook for "should create iframe and pass data via URL fragment"
✗ TypeError: Cannot read property 'document' of undefined
at createAmpAd (/tmp/extensions/amp-ad/0.1/test/test-amp-ad-3p-impl.js:32:55 <- /tmp/11af27809c816846d2190a8c8ab87115.browserify:22606:64)
at Context.<anonymous> (/tmp/extensions/amp-ad/0.1/test/test-amp-ad-3p-impl.js:71:11 <- /tmp/11af27809c816846d2190a8c8ab87115.browserify:22661:12)
``` | priority | failing test error uncaught error parent origin mismatch xyz abc describe amp ad impl it before each hook ✗ error uncaught error parent origin mismatch xyz abc the test integration js should throw in validateparentorigin with incorrect ancestororigins resulted in a call to console error see above line ⤷ if the error is not expected fix the code that generated the error ⤷ if the error is expected and synchronous use the following pattern to wrap the test code that generated the error allowconsoleerror ⤷ if the error is expected and asynchronous use the following pattern at the top of the test expectasyncconsoleerror tmp src error js tmp browserify ● describe it before each hook for should create iframe and pass data via url fragment ✗ typeerror cannot read property document of undefined at createampad tmp extensions amp ad test test amp ad impl js tmp browserify at context tmp extensions amp ad test test amp ad impl js tmp browserify | 1 |
540,015 | 15,798,385,190 | IssuesEvent | 2021-04-02 18:35:34 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | [Streaming API] [WebSub] Unable to Publish WebSub API from 'Lifecycle' page | API-M 4.0.0 Feature/AsyncAPIs Priority/High Type/Bug | ### Description:
Endpoints are not provided for WebSub Streaming APIs. When trying to publish a created WebSub API from the 'Lifecycle' page, the "Endpoint Provided" requirement is not satisfied. This check shouldn't be performed for WebSub APIs.
<img width="1546" alt="Screen Shot 2021-03-04 at 6 05 53 PM" src="https://user-images.githubusercontent.com/24828296/109966505-598b8380-7d16-11eb-9304-a24c6927eff3.png">
### Steps to reproduce:
1. Create a WebSub API.
2. Attach a business plan to it.
3. Go to the **Lifecycle** page. The **Publish** button will be disabled. | 1.0 | [Streaming API] [WebSub] Unable to Publish WebSub API from 'Lifecycle' page - ### Description:
Endpoints are not provided for WebSub Streaming APIs. When trying to publish a created WebSub API from the 'Lifecycle' page, the "Endpoint Provided" requirement is not satisfied. This check shouldn't be performed for WebSub APIs.
<img width="1546" alt="Screen Shot 2021-03-04 at 6 05 53 PM" src="https://user-images.githubusercontent.com/24828296/109966505-598b8380-7d16-11eb-9304-a24c6927eff3.png">
### Steps to reproduce:
1. Create a WebSub API.
2. Attach a business plan to it.
3. Go to the **Lifecycle** page. The **Publish** button will be disabled. | priority | unable to publish websub api from lifecycle page description endpoints are not provided for websub streaming apis when trying to publish a created websub api from the lifecycle page the endpoint provided requirement is not satisfied this check shouldn t be performed for websub apis img width alt screen shot at pm src steps to reproduce create a websub api attach a business plan to it go to the lifecycle page the publish button will be disabled | 1 |
739,541 | 25,601,472,688 | IssuesEvent | 2022-12-01 20:37:34 | E3SM-Project/scorpio | https://api.github.com/repos/E3SM-Project/scorpio | opened | PIO_IOTYPE_NETCDF4P requires NC_NODIMSCALE_ATTACH option | enhancement High Priority Next Release | With NetCDF 4.8.1 or later versions, some E3SM cases run with PIO_IOTYPE_NETCDF4P might encounter HDF5 errors from nc_enddef() when creating an HDF5-based NetCDF4 file.
The two NetCDF issues below have simple NETCDF4 test programs to reproduce the HDF5 errors:
https://github.com/Unidata/netcdf-c/issues/2165
https://github.com/Unidata/netcdf-c/issues/2251
The location that returns the error is at netcdf-c/libhdf5/nc4hdf.c, where the High-level DS API H5DSattach_scale is called multiple times
inside a loop:
```
if (H5DSattach_scale(hdf5_var->hdf_datasetid, dsid, d) < 0)
return NC_EHDFERR;
```
According to HDF5 developers, HDF5 does not test any of the HL APIs like H5DSattach_scale in a parallel setting. At some point, with enough iterations of the loop, HDF5 might get out of step between the ranks, see https://github.com/Unidata/netcdf-c/issues/1822
NetCDF 4.9.0 introduced NC_NODIMSCALE_ATTACH to make dimscale attachment to variables optional, see https://github.com/Unidata/netcdf-c/pull/2161
As a workaround, we can apply this new NetCDF option to PIO_IOTYPE_NETCDF4P to avoid calling H5DSattach_scale. | 1.0 | PIO_IOTYPE_NETCDF4P requires NC_NODIMSCALE_ATTACH option - With NetCDF 4.8.1 or later versions, some E3SM cases run with PIO_IOTYPE_NETCDF4P might encounter HDF5 errors from nc_enddef() when creating an HDF5-based NetCDF4 file.
The two NetCDF issues below have simple NETCDF4 test programs to reproduce the HDF5 errors:
https://github.com/Unidata/netcdf-c/issues/2165
https://github.com/Unidata/netcdf-c/issues/2251
The location that returns the error is at netcdf-c/libhdf5/nc4hdf.c, where the High-level DS API H5DSattach_scale is called multiple times
inside a loop:
```
if (H5DSattach_scale(hdf5_var->hdf_datasetid, dsid, d) < 0)
return NC_EHDFERR;
```
According to HDF5 developers, HDF5 does not test any of the HL APIs like H5DSattach_scale in a parallel setting. At some point, with enough iterations of the loop, HDF5 might get out of step between the ranks, see https://github.com/Unidata/netcdf-c/issues/1822
NetCDF 4.9.0 introduced NC_NODIMSCALE_ATTACH to make dimscale attachment to variables optional, see https://github.com/Unidata/netcdf-c/pull/2161
As a workaround, we can apply this new NetCDF option to PIO_IOTYPE_NETCDF4P to avoid calling H5DSattach_scale. | priority | pio iotype requires nc nodimscale attach option with netcdf or later versions some cases run with pio iotype might encounter errors from nc enddef when creating an based file the two netcdf issues below have simple test programs to reproduce the errors the location returns the error is at netcdf c c where the high level ds api scale is called multiple times inside a loop if scale var hdf datasetid dsid d return nc ehdferr according to developers does not test any of the hl apis like scale in a parallel setting at some point with enough iterations of the loop might get out of step between the ranks see netcdf introduced nc nodimscale attach to make dimscale attachment to variables optional see as a workaround we can apply this new netcdf option to pio iotype to avoid calling scale | 1 |
607,453 | 18,782,758,527 | IssuesEvent | 2021-11-08 08:57:46 | betagouv/service-national-universel | https://api.github.com/repos/betagouv/service-national-universel | opened | fix: "modifier le profil" button from the "inscription" tab | enhancement priority-HIGH | ### Feature related to a problem?
_No response_
### Feature
add the "modifier profil" button to the dropdown menu of possible actions (as well as the shortcut at the top of the right-hand panel) in the "inscription" tab
### Comments
_No response_ | 1.0 | fix: "modifier le profil" button from the "inscription" tab - ### Feature related to a problem?
_No response_
### Feature
add the "modifier profil" button to the dropdown menu of possible actions (as well as the shortcut at the top of the right-hand panel) in the "inscription" tab
### Comments
_No response_ | priority | fix modifier le profil button from the inscription tab feature related to a problem no response feature add the modifier profil button to the dropdown menu of possible actions as well as the shortcut at the top of the right hand panel in the inscription tab comments no response
770,656 | 27,049,569,632 | IssuesEvent | 2023-02-13 12:17:51 | SuadHus/D0020E-VR | https://api.github.com/repos/SuadHus/D0020E-VR | closed | Make the movement compatible with VR | High priority High Risk | Take the movement that we implemented for the player and make it work for the VR. Estimated time: 15 hours | 1.0 | Make the movement compatible with VR - Take the movement that we implemented for the player and make it work for the VR. Estimated time: 15 hours | priority | make the movement compatible with vr take the movement that we implemented for the player and make it work for the vr estimated time hours | 1 |
382,196 | 11,302,168,265 | IssuesEvent | 2020-01-17 17:03:03 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | Deleting duplicate crashed DT | priority: high scope: UI | I created a virgin duplicate. Did some work on it.
Moved onto another image.
Went back to this duplicate and decided to delete it and got a backtrace and crash.
**darktable 3.1.0+320~g218dd5d8c win 10**
00000000679B4006 00000000B0A84FA0 0000000000000000 0000000000000000 libgtk-3-0.dll!gtk_widget_set_sensitive
000000006BA02112 0047004E0041004C 0045004700410055 000000000000CCCC libduplicate.dll!_lib_duplicate_init_callback [F:/msys64/home/chill/darktable/src/libs/duplicate.c @ 443]
441: if(count==1)
442: {
> 443: gtk_widget_set_sensitive(bt, FALSE);
444: gtk_widget_set_visible(bt, FALSE);
445: }
[darktable_bt_RUOAE0.txt](https://github.com/darktable-org/darktable/files/4035873/darktable_bt_RUOAE0.txt)
| 1.0 | Deleting duplicate crashed DT - I created a virgin duplicate. Did some work on it.
Moved onto another image.
Went back to this duplicate and decided to delete it and got a backtrace and crash.
**darktable 3.1.0+320~g218dd5d8c win 10**
00000000679B4006 00000000B0A84FA0 0000000000000000 0000000000000000 libgtk-3-0.dll!gtk_widget_set_sensitive
000000006BA02112 0047004E0041004C 0045004700410055 000000000000CCCC libduplicate.dll!_lib_duplicate_init_callback [F:/msys64/home/chill/darktable/src/libs/duplicate.c @ 443]
441: if(count==1)
442: {
> 443: gtk_widget_set_sensitive(bt, FALSE);
444: gtk_widget_set_visible(bt, FALSE);
445: }
[darktable_bt_RUOAE0.txt](https://github.com/darktable-org/darktable/files/4035873/darktable_bt_RUOAE0.txt)
| priority | deleting duplicate crashed dt i created a virgin duplicate did some work on it moved onto another image went back to this duplicate and decided to delete it and got a backtrace and crash darktable win libgtk dll gtk widget set sensitive libduplicate dll lib duplicate init callback if count gtk widget set sensitive bt false gtk widget set visible bt false | 1 |
682,752 | 23,356,032,434 | IssuesEvent | 2022-08-10 07:30:07 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | netxlite: collect system resolver results using context | priority/high | This issue is about adding support inside netxlite for collecting system resolver results using a context. | 1.0 | netxlite: collect system resolver results using context - This issue is about adding support inside netxlite for collecting system resolver results using a context. | priority | netxlite collect system resolver results using context this issue is about adding support inside netxlite for collecting system resolver results using a context | 1 |
518,170 | 15,025,035,744 | IssuesEvent | 2021-02-01 20:29:16 | OpenPrinting/cups | https://api.github.com/repos/OpenPrinting/cups | closed | Regression: snprintf emulation function calls snprintf, causing recursion problem | bug platform issue priority-high | I was a little too eager to replace the sprintf calls with snprintf in cups/snprintf.c, need to make them sprintf again and add a comment so we don't do this again.
Also investigate options for not using snprintf emulation on Windows... :/
| 1.0 | Regression: snprintf emulation function calls snprintf, causing recursion problem - I was a little too eager to replace the sprintf calls with snprintf in cups/snprintf.c, need to make them sprintf again and add a comment so we don't do this again.
Also investigate options for not using snprintf emulation on Windows... :/
| priority | regression snprintf emulation function calls snprintf causing recursion problem i was a little too eager to replace the sprintf calls with sprintf in cups snprintf c need to make them sprintf again and add a comment so we don t do this again also investigate options for not using snprintf emulation on windows | 1 |
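The recursion described in the record above is easy to reproduce in miniature. The sketch below is a hypothetical Python analogy, not the CUPS C code: an emulation function must build its result from a lower-level primitive, never from the very name it is emulating.

```python
# Hypothetical analogy of the CUPS bug: an snprintf emulation whose body was
# rewritten to call snprintf again, i.e. itself.

def emulated_snprintf_bad(buf_size, fmt, *args, _depth=0):
    # BUG: delegating back to the emulated name re-enters this function on
    # every call (bounded here only so the example terminates cleanly).
    if _depth > 5:
        raise RecursionError("emulation re-entered itself")
    return emulated_snprintf_bad(buf_size, fmt, *args, _depth=_depth + 1)

def emulated_snprintf_good(buf_size, fmt, *args):
    # Fix: format with a lower-level primitive (%-formatting here, standing in
    # for sprintf) and truncate to the buffer size afterwards, as snprintf does.
    s = fmt % args
    return s[:buf_size - 1]
```

This mirrors the fix described above: inside cups/snprintf.c the formatting calls must stay sprintf, because snprintf is the function being emulated.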
279,921 | 8,675,846,954 | IssuesEvent | 2018-11-30 12:14:47 | citusdata/citus | https://api.github.com/repos/citusdata/citus | closed | Grouping sets in pushdownable subqueries incorrectly allowed | bug priority:high warm-up | GROUPING SETS are not supported (see #2219), but they can incorrectly pass through in subqueries.
Top-level grouping sets error out:
```sql
SELECT x, y FROM test GROUP BY GROUPING SETS ((x), (y));
ERROR: could not run distributed query with GROUPING SETS, CUBE, or ROLLUP
HINT: Consider using an equality filter on the distributed table's partition column.
```
but subqueries may give incorrect results:
```sql
CREATE TABLE test (x int, y int);
SELECT create_distributed_table('test','x');
INSERT INTO test SELECT s, 1 FROM generate_series(1,10) s;
SELECT * FROM (SELECT x, y FROM test GROUP BY GROUPING SETS ((x), (y))) s;
x │ y
────┼───
8 │
│ 1
1 │
│ 1
10 │
│ 1
5 │
│ 1
4 │
7 │
│ 1
3 │
│ 1
6 │
│ 1
2 │
│ 1
9 │
│ 1
(19 rows)
```
The correct result has 11 rows:
```sql
CREATE TABLE local AS SELECT * FROM test;
SELECT * FROM (SELECT x, y FROM local GROUP BY GROUPING SETS ((x), (y))) s;
x │ y
────┼───
9 │
3 │
5 │
4 │
10 │
6 │
2 │
7 │
1 │
8 │
│ 1
(11 rows)
``` | 1.0 | Grouping sets in pushdownable subqueries incorrectly allowed - GROUPING SETS are not supported (see #2219), but they can incorrectly pass through in subqueries.
Top-level grouping sets error out:
```sql
SELECT x, y FROM test GROUP BY GROUPING SETS ((x), (y));
ERROR: could not run distributed query with GROUPING SETS, CUBE, or ROLLUP
HINT: Consider using an equality filter on the distributed table's partition column.
```
but subqueries may give incorrect results:
```sql
CREATE TABLE test (x int, y int);
SELECT create_distributed_table('test','x');
INSERT INTO test SELECT s, 1 FROM generate_series(1,10) s;
SELECT * FROM (SELECT x, y FROM test GROUP BY GROUPING SETS ((x), (y))) s;
x │ y
────┼───
8 │
│ 1
1 │
│ 1
10 │
│ 1
5 │
│ 1
4 │
7 │
│ 1
3 │
│ 1
6 │
│ 1
2 │
│ 1
9 │
│ 1
(19 rows)
```
The correct result has 11 rows:
```sql
CREATE TABLE local AS SELECT * FROM test;
SELECT * FROM (SELECT x, y FROM local GROUP BY GROUPING SETS ((x), (y))) s;
x │ y
────┼───
9 │
3 │
5 │
4 │
10 │
6 │
2 │
7 │
1 │
8 │
│ 1
(11 rows)
``` | priority | grouping sets in pushdownable subqueries incorrectly allowed grouping sets are not supported see but they can incorrectly pass through in subqueries top level grouping sets error out sql select x y from test group by grouping sets x y error could not run distributed query with grouping sets cube or rollup hint consider using an equality filter on the distributed table s partition column but subqueries may give incorrect results sql create table test x int y int select create distributed table test x insert into test select s from generate series s select from select x y from test group by grouping sets x y s x │ y ────┼─── │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ rows the correct result has rows sql create table local as select from test select from select x y from local group by grouping sets x y s x │ y ────┼─── │ │ │ │ │ │ │ │ │ │ │ rows | 1 |
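For the data in the record above (x = 1..10, y = 1 everywhere), the expected GROUPING SETS semantics can be checked in plain Python. This is a hedged sketch of the SQL semantics only, not Citus code: each grouping set is evaluated once over the whole table and the results are unioned, with columns absent from a set returned as NULL (None here).

```python
# Sketch of GROUPING SETS ((x), (y)) evaluated over the whole table.

def grouping_sets(rows, sets, columns=("x", "y")):
    out = []
    for keys in sets:
        seen = set()
        for row in rows:
            group = tuple(row[k] for k in keys)
            if group in seen:
                continue
            seen.add(group)
            out.append({c: (row[c] if c in keys else None) for c in columns})
    return out

# INSERT INTO test SELECT s, 1 FROM generate_series(1,10) s;
rows = [{"x": s, "y": 1} for s in range(1, 11)]
result = grouping_sets(rows, [("x",), ("y",)])
```

This yields the 11 rows of the correct local result: ten `(x, NULL)` rows plus a single `(NULL, 1)` row. The buggy distributed plan returned extra `(NULL, 1)` rows because each shard computed its own grouping sets before the union.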
750,697 | 26,212,775,503 | IssuesEvent | 2023-01-04 08:22:29 | MilesUpHQ/reddit-clone-rails | https://api.github.com/repos/MilesUpHQ/reddit-clone-rails | opened | Infinity scroll shows an extra link in between posts which is not required | bug High priority | <img width="783" alt="Screenshot 2023-01-04 at 1 47 39 PM" src="https://user-images.githubusercontent.com/84182/210513008-fe052942-c8b1-423e-b206-f81b529a8936.png">
| 1.0 | Infinity scroll shows an extra link in between posts which is not required - <img width="783" alt="Screenshot 2023-01-04 at 1 47 39 PM" src="https://user-images.githubusercontent.com/84182/210513008-fe052942-c8b1-423e-b206-f81b529a8936.png">
| priority | infinity scroll shows an extra link in between posts which is not required img width alt screenshot at pm src | 1 |
671,931 | 22,781,430,759 | IssuesEvent | 2022-07-08 20:17:13 | vertica/spark-connector | https://api.github.com/repos/vertica/spark-connector | closed | Spark 3.3.0 aggregation interface change. | bug High Priority | ## Description
Spark 3.3.0 updated `Aggregation.groupByColumns()` to `Aggregation.groupByExpressions()`, changing the return type as well. This causes a compilation failure. Changes should ensure that the connector is backward compatible with previous Spark versions.
| 1.0 | Spark 3.3.0 aggregation interface change. - ## Description
Spark 3.3.0 updated `Aggregation.groupByColumns()` to `Aggregation.groupByExpressions()`, changing the return type as well. This causes a compilation failure. Changes should ensure that the connector is backward compatible with previous Spark versions.
| priority | spark aggregation interface change description spark updated aggregation groupbycolumns to aggregation groupbyexpressions changing the return type to as well this causes a compilation failure changes should ensure that the connector is backward compatible with previous spark versions | 1 |
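The backward-compatibility requirement in the row above — call `groupByExpressions()` on Spark 3.3+ but fall back to `groupByColumns()` on earlier versions — is commonly handled by feature detection. A minimal Python analogue of that pattern (illustrative only; the connector itself is Scala, and the class names below are hypothetical stand-ins):

```python
def group_by_fields(agg):
    """Prefer the Spark 3.3+ accessor; fall back to the pre-3.3 one."""
    if hasattr(agg, "groupByExpressions"):
        return agg.groupByExpressions()
    return agg.groupByColumns()

# Hypothetical stand-ins for the two interface generations:
class NewAggregation:
    def groupByExpressions(self):
        return ["expr(a)"]

class OldAggregation:
    def groupByColumns(self):
        return ["a"]
```

In Scala the same effect is usually achieved with reflection or a version check, so a single artifact can support both interface generations.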
150,178 | 5,738,914,285 | IssuesEvent | 2017-04-23 09:45:33 | k0shk0sh/FastHub | https://api.github.com/repos/k0shk0sh/FastHub | closed | Not existing repository can generate an empty activity | Priority: High Status: Accepted Type: Bug | - FastHub Version: 1.9.1
- Android Version: Nougat 7.0
- Phone Model: SM-G925F
If FastHub gets a broken URL, it can generate an activity that cannot be used by the user, like in this picture

| 1.0 | Not existing repository can generate an empty activity - - FastHub Version: 1.9.1
- Android Version: Nougat 7.0
- Phone Model: SM-G925F
If FastHub gets a broken URL, it can generate an activity that cannot be used by the user, like in this picture

| priority | not existing repository can generate an empty activity fasthub version android version nougat phone model sm if fasthub gets a broken url it can generate an activity that cannot be used from the user like this picture | 1 |
221,521 | 7,389,506,801 | IssuesEvent | 2018-03-16 08:58:42 | wso2/product-sp | https://api.github.com/repos/wso2/product-sp | closed | Cookie Policy and Privacy Policy redirected to non-existing page in the portal. | Priority/Highest Severity/Critical Type/Bug | **Description:**
Cookie Policy and Privacy Policy redirected to non-existing page in the portal login[1].
[1]https://localhost:9643/portal/login
**Affected Product Version:**
4.1.0-RC2
| 1.0 | Cookie Policy and Privacy Policy redirected to non-existing page in the portal. - **Description:**
Cookie Policy and Privacy Policy redirected to non-existing page in the portal login[1].
[1]https://localhost:9643/portal/login
**Affected Product Version:**
4.1.0-RC2
| priority | cookie policy and privacy policy redirected to non existing page in the portal description cookie policy and privacy policy redirected to non existing page in the portal login affected product version | 1 |
217,765 | 7,328,037,208 | IssuesEvent | 2018-03-04 16:48:24 | sans-dfir/sift | https://api.github.com/repos/sans-dfir/sift | closed | [CLI] SIFT "upgrade" Not Working | area/os/xenial distro/sift-community kind/bug priority/high | I've downloaded the newest version of the SIFT CLI (1.5.1-beta.0-master.154cb2f). If I run "sift list-upgrades" I see this:
> List of available releases
- v2017.41.1
- v2017.36.0
- v2017.31.4
- v2017.31.3
- v2017.31.2
- v2017.29.0
If I run "**sift upgrade**", I see this:
> **sift-cli@1.5.1-beta.0-master.154cb2f**
> **sift-version: v2017.27.2**
**> downloading v2017.41.1**
>> downloading sift-saltstack-v2017.41.1.tar.gz.asc
>> downloading sift-saltstack-v2017.41.1.tar.gz.sha256
>> downloading sift-saltstack-v2017.41.1.tar.gz.sha256.asc
>> downloading sift-saltstack-v2017.41.1.tar.gz
> validating file sift-saltstack-v2017.41.1.tar.gz
> validating signature for sift-saltstack-v2017.41.1.tar.gz.sha256
> extracting update sift-saltstack-v2017.41.1.tar.gz
> performing update v2017.41.1
>> Log file: /var/cache/sift/cli/v2017.41.1/saltstack.log
...
[OUTPUT TRUNCATED]
...
**>> Completed with Failures -- Success: 490, Failure: 3**
I have reviewed the "saltstack.log" in /var/cache/sift/cli/v2017.41.1, but I'm not sure what I'm looking for. I saw nothing obviously wrong.
If I re-run "sift list-upgrades", even after the upgrade, it clearly states I'm still on v2017.27.2.
I seem to be stuck in a loop. Help? | 1.0 | [CLI] SIFT "upgrade" Not Working - I've downloaded the newest version of the SIFT CLI (1.5.1-beta.0-master.154cb2f). If I run "sift list-upgrades" I see this:
> List of available releases
- v2017.41.1
- v2017.36.0
- v2017.31.4
- v2017.31.3
- v2017.31.2
- v2017.29.0
If I run "**sift upgrade**", I see this:
> **sift-cli@1.5.1-beta.0-master.154cb2f**
> **sift-version: v2017.27.2**
**> downloading v2017.41.1**
>> downloading sift-saltstack-v2017.41.1.tar.gz.asc
>> downloading sift-saltstack-v2017.41.1.tar.gz.sha256
>> downloading sift-saltstack-v2017.41.1.tar.gz.sha256.asc
>> downloading sift-saltstack-v2017.41.1.tar.gz
> validating file sift-saltstack-v2017.41.1.tar.gz
> validating signature for sift-saltstack-v2017.41.1.tar.gz.sha256
> extracting update sift-saltstack-v2017.41.1.tar.gz
> performing update v2017.41.1
>> Log file: /var/cache/sift/cli/v2017.41.1/saltstack.log
...
[OUTPUT TRUNCATED]
...
**>> Completed with Failures -- Success: 490, Failure: 3**
I have reviewed the "saltstack.log" in /var/cache/sift/cli/v2017.41.1, but I'm not sure what I'm looking for. I saw nothing obviously wrong.
If I re-run "sift list-upgrades", even after the upgrade, it clearly states I'm still on v2017.27.2.
I seem to be stuck in a loop. Help? | priority | sift upgrade not working i ve downloaded the newest version of the sift cli beta master if i run sift list upgrades i see this list of available releases if i run sift upgrade i see this sift cli beta master sift version downloading downloading sift saltstack tar gz asc downloading sift saltstack tar gz downloading sift saltstack tar gz asc downloading sift saltstack tar gz validating file sift saltstack tar gz validating signature for sift saltstack tar gz extracting update sift saltstack tar gz performing update log file var cache sift cli saltstack log completed with failures success failure i have reviewed the saltstack log in var cache sift cli but i m not sure what i m looking for i saw nothing obviously wrong if i re run sift list upgrades even after it clearly states i m still on i seem to be stuck in a loop help | 1 |
32,725 | 2,759,382,140 | IssuesEvent | 2015-04-28 03:08:14 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | FedEx: Tracking link needs to be updated -- currently leads to "page has moved" | Bug Low-Hanging Fruit Priority: High | https://duckduckgo.com/?q=fedex+9241990100130206401644&ia=answer
Clicking the link we provide sends the user to a page which states:
```
Moved Permanently
The document has moved here.
```
We need to be sending them to:
https://www.fedex.com/apps/fedextrack/?action=track&tracknumbers=9241990100130206401644&action=track | 1.0 | FedEx: Tracking link needs to be updated -- currently leads to "page has moved" - https://duckduckgo.com/?q=fedex+9241990100130206401644&ia=answer
Clicking the link we provide sends the user to a page which states:
```
Moved Permanently
The document has moved here.
```
We need to be sending them to:
https://www.fedex.com/apps/fedextrack/?action=track&tracknumbers=9241990100130206401644&action=track | priority | fedex tracking link needs to be updated currently leads to page has moved clicking the link we provide sends the user to a page which states moved permanently the document has moved here we need to be sending them to | 1 |
432,460 | 12,493,736,071 | IssuesEvent | 2020-06-01 09:49:25 | wazuh/wazuh-docker | https://api.github.com/repos/wazuh/wazuh-docker | opened | Ensure persistency for agentless monitoring | priority/high status/in-progress type/bug | ### Description
As reported in #344 I set up a docker environment with a Mikrotik VM via agentless monitoring (with the `register_host.ssh` script). This script saves state to `/var/ossec/agentless/.passlist`, which is not included in our persistent locations.
### Tasks
- [ ] Persist `agentless` state
- [ ] Test deploy & upgrade procedures | 1.0 | Ensure persistency for agentless monitoring - ### Description
As reported in #344 I set up a docker environment with a Mikrotik VM via agentless monitoring (with the `register_host.ssh` script). This script saves state to `/var/ossec/agentless/.passlist`, which is not included in our persistent locations.
### Tasks
- [ ] Persist `agentless` state
- [ ] Test deploy & upgrade procedures | priority | ensure persistency for agentless monitoring description as reported in i set up a docker environment with a mikrotik vm via agentless monitoring with the register host ssh script this script saves state to var ossec agentless passlist which is not included in our persistent locations tasks persist agentless state test deploy upgrade procedures | 1 |
518,250 | 15,026,250,679 | IssuesEvent | 2021-02-01 22:21:30 | borgbase/vorta | https://api.github.com/repos/borgbase/vorta | closed | Vorta freezes | os:linux package:flatpak priority:high type:bug | **Describe the bug**
I upgraded from Vorta 0.7.1 to Vorta 0.7.2 (Flatpak). If I click on "add repository", "add folder" or "add files" the app freezes and **I had to kill it manually.** Don't know what happened: Vorta 0.7.1 had a serious bug with the file picker that was fixed (as stated in https://github.com/borgbase/vorta/issues/693). Don't know whether it's fixed or not: this new version is unusable.
Please let me know how can help you testing this useful software (maybe a Flatpak beta version?) but please, don't publish extremely buggy software. Thanks.
**To Reproduce**
Steps to reproduce the behavior:
_Please read above._
**Desktop (please complete the following information):**
- OS: Operating System: Fedora 33, KDE Plasma Version: 5.20.5, KDE Frameworks Version: 5.78.0, Qt Version: 5.15.2, Kernel Version: 5.10.10-200.fc33.x86_64, OS Type: 64-bit, Processors: 4 × Intel® Core™ i5-6600 CPU @ 3.30GHz, Memory: 15.3 GiB of RAM Graphics Processor: Mesa Intel® HD Graphics 530
- Vorta version: 0.7.2
- Installed from: Flathub
**Additional context**
Log:
2021-01-31 18:51:50,616 - apscheduler.scheduler - INFO - Scheduler started
2021-01-31 18:51:50,802 - root - INFO - Using NetworkManagerMonitor NetworkStatusMonitor implementation.
2021-01-31 18:51:50,861 - vorta.borg.borg_thread - INFO - Running command /app/bin/borg --version
| 1.0 | Vorta freezes - **Describe the bug**
I upgraded from Vorta 0.7.1 to Vorta 0.7.2 (Flatpak). If I click on "add repository", "add folder" or "add files" the app freezes and **I had to kill it manually.** Don't know what happened: Vorta 0.7.1 had a serious bug with the file picker that was fixed (as stated in https://github.com/borgbase/vorta/issues/693). Don't know whether it's fixed or not: this new version is unusable.
Please let me know how can help you testing this useful software (maybe a Flatpak beta version?) but please, don't publish extremely buggy software. Thanks.
**To Reproduce**
Steps to reproduce the behavior:
_Please read above._
**Desktop (please complete the following information):**
- OS: Operating System: Fedora 33, KDE Plasma Version: 5.20.5, KDE Frameworks Version: 5.78.0, Qt Version: 5.15.2, Kernel Version: 5.10.10-200.fc33.x86_64, OS Type: 64-bit, Processors: 4 × Intel® Core™ i5-6600 CPU @ 3.30GHz, Memory: 15.3 GiB of RAM Graphics Processor: Mesa Intel® HD Graphics 530
- Vorta version: 0.7.2
- Installed from: Flathub
**Additional context**
Log:
2021-01-31 18:51:50,616 - apscheduler.scheduler - INFO - Scheduler started
2021-01-31 18:51:50,802 - root - INFO - Using NetworkManagerMonitor NetworkStatusMonitor implementation.
2021-01-31 18:51:50,861 - vorta.borg.borg_thread - INFO - Running command /app/bin/borg --version
| priority | vorta freezes describe the bug i upgraded from vorta to vorta flatpak if i click on add repository add folder or add files the app freezes and i had to kill it manually don t know what happened vorta had serious bug with file picker that was fixed as stated in don t know it s fixed or not this new version is unusable please let me know how can help you testing this useful software maybe a flatpak beta version but please don t publish extremely buggy software thanks to reproduce steps to reproduce the behavior please read above desktop please complete the following information os operating system fedora kde plasma version kde frameworks version qt version kernel version os type bit processors × intel® core™ cpu memory gib of ram graphics processor mesa intel® hd graphics vorta version installed from flathub additional context log apscheduler scheduler info scheduler started root info using networkmanagermonitor networkstatusmonitor implementation vorta borg borg thread info running command app bin borg version | 1 |
355,473 | 10,581,111,456 | IssuesEvent | 2019-10-08 08:30:29 | AY1920S1-CS2113-T13-2/main | https://api.github.com/repos/AY1920S1-CS2113-T13-2/main | opened | As a nurse, I can exit the system... | priority.High | ...so that other people can not access my data when I am not around. | 1.0 | As a nurse, I can exit the system... - ...so that other people can not access my data when I am not around. | priority | as a nurse i can exit the system so that other people can not access my data when i am not around | 1 |
361,141 | 10,704,890,832 | IssuesEvent | 2019-10-24 12:42:51 | XiaoMi/hiui | https://api.github.com/repos/XiaoMi/hiui | closed | HIUI 1.5.x => 2.0.0 Upgrade Guide | good first issue high priority | HIUI 2.0.0 is a major update. In this iteration we reworked the API of every component. Component APIs are now designed against a unified specification; property names, parameters, and even data types are consistent across components, which makes them easier to learn. 😃
Even so, we still provide the greatest possible forward support for 1.x users. If your project is upgrading from 1.x, please refer to the upgrade guide below.
https://xiaomi.github.io/hiui/zh-CN/docs/upgrade-from-1x | 1.0 | HIUI 1.5.x => 2.0.0 Upgrade Guide - HIUI 2.0.0 is a major update. In this iteration we reworked the API of every component. Component APIs are now designed against a unified specification; property names, parameters, and even data types are consistent across components, which makes them easier to learn. 😃
Even so, we still provide the greatest possible forward support for 1.x users. If your project is upgrading from 1.x, please refer to the upgrade guide below.
https://xiaomi.github.io/hiui/zh-CN/docs/upgrade-from-1x | priority | hiui x upgrade guide hiui is a major update in this iteration we reworked the api of every component component apis are now designed against a unified specification property names parameters and even data types are consistent across components which makes them easier to learn 😃 even so we still provide the greatest possible forward support for x users if your project is upgrading from x please refer to the upgrade guide below | 1 |
321,112 | 9,793,705,653 | IssuesEvent | 2019-06-10 20:38:40 | infor-design/enterprise-ng | https://api.github.com/repos/infor-design/enterprise-ng | closed | XSS in modal dialog title | [2] priority: high type: bug :bug: type: patch | **Describe the bug**
Passing a script tag as input to the modal dialog title will execute the script when opening the modal.
**To Reproduce**
Steps to reproduce the behavior:
1. Open the modal-dialog demo.
2. Pass a script as input to the title. ('<script>alert("title");</script>' for example.)
3. Open any of the modals in the demo.
4. The script is executed before the modal is opened.
**Expected behavior**
The script is not executed.
**Version**
- ids-enterprise-ng: 5.3 (recreated in 5.5.0-dev as well)
**Screenshots**

**Platform**
- OS Version: Windows 10
- Browser Name: Chrome
- Browser Version: 74.0.3729.169 (Official Build) (64-bit)
| 1.0 | XSS in modal dialog title - **Describe the bug**
Passing a script tag as input to the modal dialog title will execute the script when opening the modal.
**To Reproduce**
Steps to reproduce the behavior:
1. Open the modal-dialog demo.
2. Pass a script as input to the title. ('<script>alert("title");</script>' for example.)
3. Open any of the modals in the demo.
4. The script is executed before the modal is opened.
**Expected behavior**
The script is not executed.
**Version**
- ids-enterprise-ng: 5.3 (recreated in 5.5.0-dev as well)
**Screenshots**

**Platform**
- OS Version: Windows 10
- Browser Name: Chrome
- Browser Version: 74.0.3729.169 (Official Build) (64-bit)
| priority | xss in modal dialog title describe the bug passing a script tag as input to the modal dialog title will execute the script when opening the modal to reproduce steps to reproduce the behavior open the modal dialog demo pass a script as input to the title alert title for example open any of the modals in the demo the script is executed before the modal is opened expected behavior the script is not executed version ids enterprise ng recreated in dev as well screenshots platform os version windows browser name chrome browser version official build bit | 1 |
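The usual fix for this class of bug is to escape user-supplied titles before they reach the DOM, so markup renders as text instead of executing. A minimal sketch in Python (illustrative only — the affected component is JavaScript, and this is not the library's own sanitizer):

```python
import html

def safe_modal_title(user_title):
    """Escape user input so markup is rendered as text, not executed."""
    return html.escape(user_title)

print(safe_modal_title('<script>alert("title");</script>'))
# &lt;script&gt;alert(&quot;title&quot;);&lt;/script&gt;
```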
154,197 | 5,915,577,364 | IssuesEvent | 2017-05-22 08:15:04 | dhowe/RiTa | https://api.github.com/repos/dhowe/RiTa | closed | Update documentation to match recent code changes | PRIORITY: High | - [x] remove removeWord() function
- [x] remove RiLexicon object from the API altogether
- [x] add RiLexicon methods to the 2nd column of the index page, labeled RiTa.addWord, RiTa.alliterations, etc.
- [x] add some kind of subtle header showing that these are lexicon-related functions | 1.0 | Update documentation to match recent code changes - - [x] remove removeWord() function
- [x] remove RiLexicon object from the API altogether
- [x] add RiLexicon methods to the 2nd column of the index page, labeled RiTa.addWord, RiTa.alliterations, etc.
- [x] add some kind of subtle header showing that these are lexicon-related functions | priority | update documentation to match recent code changes remove removeword function remove rilexicon object from the api altogether add rilexicon methods to the column of the index page labeled rita addword rita alliterations etc add some kind of subtle header showing that these are lexicon related functions | 1 |
182,724 | 6,673,128,465 | IssuesEvent | 2017-10-04 14:09:58 | Templarian/MaterialDesign | https://api.github.com/repos/Templarian/MaterialDesign | closed | Hulu Brand Icon | Brand Icon Contribution High Priority Icon Request | The Hulu "h" (without the box) from their icon would be awesome. We have Netflix, so I don't see why this shouldn't be added.

| 1.0 | Hulu Brand Icon - The Hulu "h" (without the box) from their icon would be awesome. We have Netflix, so I don't see why this shouldn't be added.

| priority | hulu brand icon the hulu h without the box from their icon would be awesome we have netflix so i don t see why this shouldn t be added | 1 |
639,419 | 20,753,152,636 | IssuesEvent | 2022-03-15 09:40:46 | woocommerce/pinterest-for-woocommerce | https://api.github.com/repos/woocommerce/pinterest-for-woocommerce | closed | Few api endpoint /v3/catalog/partner/connect/ requests are failing | type: bug priority: high | ### Describe the bug:
Reason: Pinterest allows users to claim domains with paths (e.g. https://xxz.com/en-US/ might be claimed by one user and https://xxz.com/en-en_GB/ claimed by another user)
When merchant domains are passed with a path, one verification step fails.
### Steps to reproduce:
<!-- Describe the steps to reproduce the behavior.-->
1. invoke /v3/catalog/partner/connect/ with merchant domain as domain + path
### Expected behavior:
merchant_domain parameter in request should only contain domain from Woo not with path
### Actual behavior:
merchant_domain parameter in request contain domain with path
Currently during domain verification plugin invokes following APIs
1. https://developers.pinterest.com/docs/redoc/#operation/v3_verification_code_GET to get verification code
2. https://developers.pinterest.com/docs/redoc/#operation/v3_verify_domain_POST to verify domain
This restricts verification to only domain level not with domain+path.
Please switch to following APIs
1. https://developers.pinterest.com/docs/redoc/#operation/v3_verification_code_for_website_GET to get verification code
2. https://developers.pinterest.com/docs/redoc/#operation/v3_verify_website_POST to verify website
Differences in domain and website API:
- Instead of hostname you need to pass get_home_url() which is passed in merchant connect.
- pass website in query parameter instead of path as per documentation
- can_claim_multiple param is not needed. can_claim_multiple is True for requests coming from the website APIs.
| 1.0 | Few api endpoint /v3/catalog/partner/connect/ requests are failing - ### Describe the bug:
Reason: Pinterest allows users to claim domains with paths (e.g. https://xxz.com/en-US/ might be claimed by one user and https://xxz.com/en-en_GB/ claimed by another user)
When merchant domains are passed with a path, one verification step fails.
### Steps to reproduce:
<!-- Describe the steps to reproduce the behavior.-->
1. invoke /v3/catalog/partner/connect/ with merchant domain as domain + path
### Expected behavior:
merchant_domain parameter in request should only contain domain from Woo not with path
### Actual behavior:
merchant_domain parameter in request contain domain with path
Currently during domain verification plugin invokes following APIs
1. https://developers.pinterest.com/docs/redoc/#operation/v3_verification_code_GET to get verification code
2. https://developers.pinterest.com/docs/redoc/#operation/v3_verify_domain_POST to verify domain
This restricts verification to only domain level not with domain+path.
Please switch to following APIs
1. https://developers.pinterest.com/docs/redoc/#operation/v3_verification_code_for_website_GET to get verification code
2. https://developers.pinterest.com/docs/redoc/#operation/v3_verify_website_POST to verify website
Differences in domain and website API:
- Instead of hostname you need to pass get_home_url() which is passed in merchant connect.
- pass website in query parameter instead of path as per documentation
- can_claim_multiple param is not needed. can_claim_multiple is True for requests coming from the website APIs.
| priority | few api endpoint catalog partner connect requests are failing describe the bug reason pinterest allow users to claim domain with paths e g might be claimed by one user and claimed by another user when merchant domains are passed with path one verification step is failing steps to reproduce invoke catalog partner connect with merchant domain as domain path expected behavior merchant domain parameter in request should only contain domain from woo not with path actual behavior merchant domain parameter in request contain domain with path currently during domain verification plugin invokes following apis to get verification code to verify domain this restricts verification to only domain level not with domain path please switch to following apis to get verification code to verify website differences in domain and website api instead of hostname you need to pass get home url which is passed in merchant connect pass website in query parameter instead of path as per documentation can claim multiple param is not needed can claim multiple is true for requests coming for website apis | 1 |
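The expected behavior above — send only the host, never the path — amounts to a small normalization step. A sketch in Python (the function name is hypothetical; the real plugin is PHP and reads the site URL from `get_home_url()`):

```python
from urllib.parse import urlparse

def merchant_domain(home_url):
    """Return only the host portion of the store's home URL.

    Pinterest lets different users claim the same domain under different
    paths, but merchant_domain is expected to carry the bare domain.
    """
    return urlparse(home_url).netloc
```

For example, `merchant_domain("https://xxz.com/en-US/")` yields just the host, with the locale path dropped.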
644,182 | 20,969,218,361 | IssuesEvent | 2022-03-28 09:45:39 | EBISPOT/goci | https://api.github.com/repos/EBISPOT/goci | closed | Investigating why this submission: https://www.ebi.ac.uk/gwas/depo-curation/submissions/617bbfa3921d1b000131a073 is blocked for editing from the depo-curation app | Priority: High Curation app | This submission
https://www.ebi.ac.uk/gwas/depo-curation/submissions/617bbfa3921d1b000131a073
is blocked for editing from the depo-curation app.
@sajo-ebi could you please take a look - this is a blocker for data release to start.
| 1.0 | Investigating why this submission: https://www.ebi.ac.uk/gwas/depo-curation/submissions/617bbfa3921d1b000131a073 is blocked for editing from the depo-curation app - This submission
https://www.ebi.ac.uk/gwas/depo-curation/submissions/617bbfa3921d1b000131a073
is blocked for editing from the depo-curation app.
@sajo-ebi could you please take a look - this is a blocker for data release to start.
| priority | investigating why this submission is blocked for editing from the depo curation app this submission is blocked for editing from the depo curation app sajo ebi could you please take a look this is a blocker for data release to start | 1 |
340,926 | 10,280,227,012 | IssuesEvent | 2019-08-26 04:14:11 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Incorrect redeclared symbol check logic for lambdas | Area/Language Priority/High Type/Bug | The below check is incorrect. The arguments should be swapped and the flags of the owner of the symbol should be checked.
https://github.com/ballerina-platform/ballerina-lang/blob/df71d42b7ccf294e8804f053ba8f8d024ab3f8ca/compiler/ballerina-lang/src/main/java/org/wso2/ballerinalang/compiler/semantics/analyzer/SymbolResolver.java#L262 | 1.0 | Incorrect redeclared symbol check logic for lambdas - The below check is incorrect. The arguments should be swapped and the flags of the owner of the symbol should be checked.
https://github.com/ballerina-platform/ballerina-lang/blob/df71d42b7ccf294e8804f053ba8f8d024ab3f8ca/compiler/ballerina-lang/src/main/java/org/wso2/ballerinalang/compiler/semantics/analyzer/SymbolResolver.java#L262 | priority | incorrect redeclared symbol check logic for lambdas the below check is incorrect the arguments should be swapped and the flags of the owner of the symbol should be checked | 1 |
216,069 | 7,300,977,889 | IssuesEvent | 2018-02-27 02:26:58 | distcc/distcc | https://api.github.com/repos/distcc/distcc | reopened | distcc tcp mode is a security risk [CVE 2004-2687] | Priority-High bug | http://www.cvedetails.com/cve/2004-2687
distcc currently has two modes by which clients can connect - over TCP (default) or SSH.
In TCP mode, distcc checks the client IP address against a whitelist, which (iirc) is required but can be set quite loosely. There is of course no guarantee that every user on a permitted client address is friendly.
Once the connection is established the client can reasonably easily manipulate the server into running arbitrary commands. This is automated in eg http://www.rapid7.com/db/modules/exploit/unix/misc/distcc_exec
Possible solutions include:
- Deprecating TCP mode and requiring SSH connections. This would largely push the problem onto SSH and make it more reasonable for distccd to trust the client. However SSH does have a noticeable performance overhead, or to be more precise it did when I measured 10 years ago. It may have changed. It may cause problems with server discovery.
- Use some other transport that provides authentication/integrity/encryption but is faster than SSH.
- Add a simple authentication protocol. This may be cheaper than SSH. There is a risk the authentication protocol itself would be buggy. If there's only authentication of the client without integrity checks or encryption, it may remain vulnerable to malicious networks.
- Using platform-specific security features such as `seccomp_bpf` to restrict what can be done once the compiler command is launched. This might be a useful adjunct to other measures but is probably not enough, and may be brittle.
| 1.0 | distcc tcp mode is a security risk [CVE 2004-2687] - http://www.cvedetails.com/cve/2004-2687
distcc currently has two modes by which clients can connect - over TCP (default) or SSH.
In TCP mode, distcc checks the client IP address against a whitelist, which (iirc) is required but can be set quite loosely. There is of course no guarantee that every user on a permitted client address is friendly.
Once the connection is established the client can reasonably easily manipulate the server into running arbitrary commands. This is automated in eg http://www.rapid7.com/db/modules/exploit/unix/misc/distcc_exec
Possible solutions include:
- Deprecating TCP mode and requiring SSH connections. This would largely push the problem onto SSH and make it more reasonable for distccd to trust the client. However SSH does have a noticeable performance overhead, or to be more precise it did when I measured 10 years ago. It may have changed. It may cause problems with server discovery.
- Use some other transport that provides authentication/integrity/encryption but is faster than SSH.
- Add a simple authentication protocol. This may be cheaper than SSH. There is a risk the authentication protocol itself would be buggy. If there's only authentication of the client without integrity checks or encryption, it may remain vulnerable to malicious networks.
- Using platform-specific security features such as `seccomp_bpf` to restrict what can be done once the compiler command is launched. This might be a useful adjunct to other measures but is probably not enough, and may be brittle.
| priority | distcc tcp mode is a security risk distcc currently has two modes by which clients can connect over tcp default or ssh in tcp mode distcc checks the client ip address against a whitelist which iirc is required but can be set quite loosely there is of course no guarantee that every user on a permitted client address is friendly once the connection is established the client can reasonably easily manipulate the server into running arbitrary commands this is automated in eg possible solutions include deprecating tcp mode and requiring ssh connections this would largely push the problem onto ssh and make it more reasonable for distccd to trust the client however ssh does have a noticeable performance overhead or to be more precise it did when i measured years ago it may have changed it may cause problems with server discovery use some other transport that provides authentication integrity encryption but is faster than ssh add a simple authentication protocol this may be cheaper than ssh there is a risk the authentication protocol itself would be buggy if there s only authentication of the client without integrity checks or encryption it may remain vulnerable to malicious networks using platform specific security features such as seccomp bpf to restrict what can be done once the compiler command is launched this might be a useful adjunct to other measures but is probably not enough and may be brittle | 1 |
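The whitelist check described in this report is IP-based, which is exactly why it authenticates the client machine rather than the user on it. A rough Python sketch of such a check (illustrative only; distcc's real implementation is C, and the networks below are made up):

```python
import ipaddress

# Example allowlist (hypothetical networks)
ALLOWED_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                ipaddress.ip_network("192.168.1.0/24")]

def client_allowed(addr):
    """True if the client address falls inside an allowed network.

    Note: every user on an allowed host passes this check, which is
    the weakness the issue describes.
    """
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETS)
```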
610,438 | 18,907,502,295 | IssuesEvent | 2021-11-16 10:37:42 | P7-Team/P7-Client | https://api.github.com/repos/P7-Team/P7-Client | closed | As a Provider, I would like to be able to receive Tasks | enhancement Provider Priority:High | **Describe the solution you'd like**
A Provider should be able to receive a Task in such a way that it can start working on it
**Sub tasks**
- [x] P7-Team/P7-WebService#19
- [x] #49
| 1.0 | As a Provider, I would like to be able to receive Tasks - **Describe the solution you'd like**
A Provider should be able to receive a Task in such a way that it can start working on it
**Sub tasks**
- [x] P7-Team/P7-WebService#19
- [x] #49
| priority | as a provider i would like to be able to receive tasks describe the solution you d like a provider should be able to receive a task in such a way that it can start working on it sub tasks team webservice | 1 |
296,097 | 9,104,317,282 | IssuesEvent | 2019-02-20 17:51:42 | boutiques/boutiques | https://api.github.com/repos/boutiques/boutiques | closed | Add a property for online platform(s) having the tool installed | enhancement high priority | Add a property `online-platforms` containing a list of URLs where the descriptor is installed and available. Upon publication, the elements in this list should be added to the 'hasPart' list in Zenodo. That would allow users to be pointed to CBRAIN, VIP and other platforms where the descriptor can be found. As discussed with @shots47s during the CONP meeting today. | 1.0 | Add a property for online platform(s) having the tool installed - Add a property `online-platforms` containing a list of URLs where the descriptor is installed and available. Upon publication, the elements in this list should be added to the 'hasPart' list in Zenodo. That would allow users to be pointed to CBRAIN, VIP and other platforms where the descriptor can be found. As discussed with @shots47s during the CONP meeting today. | priority | add a property for online platform s having the tool installed add a property online platforms containing a list of urls where the descriptor is installed and available upon publication the elements in this list should be added to the haspart list in zenodo that would allow users to be pointed to cbrain vip and other platforms where the descriptor can be found as discussed with during the conp meeting today | 1 |
193,885 | 6,888,929,033 | IssuesEvent | 2017-11-22 08:32:27 | HAS-CRM/IssueTracker | https://api.github.com/repos/HAS-CRM/IssueTracker | closed | Send to EBM - Payment Term Validation not working | Priority.High Status.Accepted Type.Bug | Background
- Due to recent addition of branch for payment terms, payment term was duplicated for different branches.
- Validation failed when account payment term **ID** differs from user selected payment term **ID** due to different branches selected
Eg: KYPS -> Approved account payment term with branch : Singapore -> ID = 123456
User Selected -> Payment term (Branch: Malaysia)-> ID = 23456
Validation will fail as current logic matches using GUID | 1.0 | Send to EBM - Payment Term Validation not wroking - Background
- Due to recent addition of branch for payment terms, payment term was duplicated for different branches.
- Validation failed when account payment term **ID** differs from user selected payment term **ID** due to different branches selected
Eg: KYPS -> Approved account payment term with branch : Singapore -> ID = 123456
User Selected -> Payment term (Branch: Malaysia)-> ID = 23456
Validation will fail as current logic matches using GUID | priority | send to ebm payment term validation not wroking background due to recent addition of branch for payment terms payment term was duplicated for different branches validation failed when account payment term id differs from user selected payment term id due to different branches selected eg kyps approved account payment term with branch singapore id user selected payment term branch malaysia id validation will fail as current logic matches using guid | 1 |
199,843 | 6,994,917,171 | IssuesEvent | 2017-12-15 17:05:10 | emory-libraries/ezpaarse-platforms | https://api.github.com/repos/emory-libraries/ezpaarse-platforms | closed | American College of Physicians | Additional Parser enhancement High Priority review | Example Domains:
annals.org
login.acponline.org
www.acponline.org
store.acponline.org
Priority: High
| 1.0 | American College of Physicians - Example Domains:
annals.org
login.acponline.org
www.acponline.org
store.acponline.org
Priority: High
| priority | american college of physicians example domains annals org login acponline org store acponline org priority high | 1 |
361,932 | 10,721,493,081 | IssuesEvent | 2019-10-27 03:08:04 | AY1920S1-CS2103T-F12-2/main | https://api.github.com/repos/AY1920S1-CS2103T-F12-2/main | closed | Reminder-System: Implementation | priority.High status.Ongoing type.Epic v1.3 | Create a way for users to add reminders that display as a "message of the day" when the application is launched. | 1.0 | Reminder-System: Implementation - Create a way for users to add reminders that display as a "message of the day" when the application is launched. | priority | reminder system implementation create a way for users to add reminders that display as a message of the day when the application is launched | 1 |
427,467 | 12,395,635,266 | IssuesEvent | 2020-05-20 19:01:20 | ChainSafe/forest | https://api.github.com/repos/ChainSafe/forest | closed | Convert relevant methods in the ChainStore to functions | Blockchain Priority: 2 - High RPC | **Issue summary**
<!-- A clear and concise description of what the task is. -->
This is related to building out the RPC.
We don't want to pass around the ChainStore. We want to pass around the database intstead. But the methods on the ChainStore are very useful as a lot of them match 1 to 1 with the RPC API. The nice thing is, most of those methods do not operate on data from the ChainStore itself, only the DB. So we can create functions that have almost the same logic but takes in DB instead of ChainStore. Then we also convert the Chainstore methods to call these functions instead.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 --> | 1.0 | Convert relevant methods in the ChainStore to functions - **Issue summary**
<!-- A clear and concise description of what the task is. -->
This is related to building out the RPC.
We don't want to pass around the ChainStore. We want to pass around the database intstead. But the methods on the ChainStore are very useful as a lot of them match 1 to 1 with the RPC API. The nice thing is, most of those methods do not operate on data from the ChainStore itself, only the DB. So we can create functions that have almost the same logic but takes in DB instead of ChainStore. Then we also convert the Chainstore methods to call these functions instead.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 --> | priority | convert relevant methods in the chainstore to functions issue summary this is related to building out the rpc we don t want to pass around the chainstore we want to pass around the database intstead but the methods on the chainstore are very useful as a lot of them match to with the rpc api the nice thing is most of those methods do not operate on data from the chainstore itself only the db so we can create functions that have almost the same logic but takes in db instead of chainstore then we also convert the chainstore methods to call these functions instead other information and links | 1 |