| Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19 to 19) | repo (stringlengths, 5 to 112) | repo_url (stringlengths, 34 to 141) | action (stringclasses, 3 values) | title (stringlengths, 1 to 855) | labels (stringlengths, 4 to 721) | body (stringlengths, 1 to 261k) | index (stringclasses, 13 values) | text_combine (stringlengths, 96 to 261k) | label (stringclasses, 2 values) | text (stringlengths, 96 to 240k) | binary_label (int64, 0 to 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
721,163
| 24,819,960,830
|
IssuesEvent
|
2022-10-25 15:40:56
|
KinsonDigital/Velaptor
|
https://api.github.com/repos/KinsonDigital/Velaptor
|
closed
|
🚧Create system for logging errors, warnings, and events
|
✨new feature high priority preview
|
### I have done the items below . . .
- [X] I have updated the title by replacing the '**_<title_**>' section.
### Description
Create a logging system for logging exceptions, errors, and events that occur throughout the system. The system should be able to log the type of each entry, such as an event, exception, warning, or info.
---
**Types of output:**
The system should also have different kinds of outputs for logging. Such as the console, log files, etc.
The minimum should be log files in the application directory and the console.
---
**When, how, and if things are logged:**
The system should also have logging turned off by default for **release** versions but turned on by default for **debug** versions. The various kinds of logging can be turned on and off manually by changing/adding settings and values to the settings file.
**Logging Settings:**
1. LoggingEnabled = true | false
- This will enable or disable ALL logging with the logging system once implemented.
2. ConsoleLoggingEnabled = true | false
- Will log errors and warnings to the console if **_true_** AND if **_LoggingEnabled = true_**
3. FileLoggingEnabled = true | false
- Will log errors and warnings to a log file if **_true_** AND if **_LoggingEnabled = true_**
4. LogWarningsEnabled = true | false
- Will log warnings if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if _**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
5. LogErrorsEnabled = true | false
- Will log errors if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if _**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
6. LogEventsEnabled = true | false
- Will log events if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if
_**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
- Events can be logged via direct logging service call or by a custom attribute on a method.
7. LogInfoEnabled = true | false
- Will log info if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if _**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
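The settings scheme above boils down to a gate check: a message of a given type reaches a given output only when the master switch, the type switch, and that output's switch are all on. Below is a minimal illustrative sketch of that logic, not the Velaptor implementation; the `LogSettings` names are hypothetical mirrors of the settings-file keys listed above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogSettings:
    # Hypothetical mirror of the settings file keys described above.
    logging_enabled: bool = True          # LoggingEnabled
    console_logging_enabled: bool = True  # ConsoleLoggingEnabled
    file_logging_enabled: bool = True     # FileLoggingEnabled
    log_warnings_enabled: bool = True     # LogWarningsEnabled

def warn_to_console(s: LogSettings) -> bool:
    # A warning reaches the console only if all three gates are open.
    return s.logging_enabled and s.log_warnings_enabled and s.console_logging_enabled

def warn_to_file(s: LogSettings) -> bool:
    # Same warning gate, routed through the file output switch instead.
    return s.logging_enabled and s.log_warnings_enabled and s.file_logging_enabled
```

For example, `warn_to_console(LogSettings(logging_enabled=False))` is `False` regardless of the other switches, matching the "AND if LoggingEnabled = true" wording above.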
### Acceptance Criteria
**This issue is finished when:**
- [x] Logging service created
- The app settings service will be injected into the logging service
- Checking for the app setting will be done internally with the logging methods of the logger class. This way logic for checking log settings does not have to be written throughout the application. The logging service will have read-only properties that hold the logging settings.
- The logging service will be a singleton. This means the logging settings will be loaded once and saved for the application's lifetime.
- [x] All types of logging can be turned on and off
- [x] Logging to the console can be turned on and off
- Logging to the console should be displayed in the console app that runs alongside the **_VelaptorTesting_** application.
- [x] Logging to a file can be turned on and off
- Logging Of Warnings
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Yellow in color in the console
- Logging Of Errors
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Red in color in console
- Logging Of Events
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Cyan in color in the console
- Logging Of Info
- [x] Can be turned on and off
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Gray or white in color in console
- OpenGL Logging
- [x] Change the `GLInvoker.DebugCallback()` implementation to log warnings and errors.
- [x] Code documentation added if required
- [x] Unit tests added
- [x] All unit tests pass
### ToDo Items
- [x] Draft pull request created and linked to this issue
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [x] Issue linked to the proper project
- [X] Issue linked to proper milestone
### Issue Dependencies
- #248
### Related Work
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
🚧Create system for logging errors, warnings, and events - ### I have done the items below . . .
- [X] I have updated the title by replacing the '**_<title_**>' section.
### Description
Create a logging system for logging exceptions, errors, and events that occur throughout the system. The system should be able to log the type of each entry, such as an event, exception, warning, or info.
---
**Types of output:**
The system should also have different kinds of outputs for logging. Such as the console, log files, etc.
The minimum should be log files in the application directory and the console.
---
**When, how, and if things are logged:**
The system should also have logging turned off by default for **release** versions but turned on by default for **debug** versions. The various kinds of logging can be turned on and off manually by changing/adding settings and values to the settings file.
**Logging Settings:**
1. LoggingEnabled = true | false
- This will enable or disable ALL logging with the logging system once implemented.
2. ConsoleLoggingEnabled = true | false
- Will log errors and warnings to the console if **_true_** AND if **_LoggingEnabled = true_**
3. FileLoggingEnabled = true | false
- Will log errors and warnings to a log file if **_true_** AND if **_LoggingEnabled = true_**
4. LogWarningsEnabled = true | false
- Will log warnings if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if _**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
5. LogErrorsEnabled = true | false
- Will log errors if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if _**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
6. LogEventsEnabled = true | false
- Will log events if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if
_**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
- Events can be logged via direct logging service call or by a custom attribute on a method.
7. LogInfoEnabled = true | false
- Will log info if **_true_** AND if **_LoggingEnabled = true_**. Will log to console and/or file if _**ConsoleLoggingEnabled**_ and/or _**FileLoggingEnabled**_ are enabled.
### Acceptance Criteria
**This issue is finished when:**
- [x] Logging service created
- The app settings service will be injected into the logging service
- Checking for the app setting will be done internally with the logging methods of the logger class. This way logic for checking log settings does not have to be written throughout the application. The logging service will have read-only properties that hold the logging settings.
- The logging service will be a singleton. This means the logging settings will be loaded once and saved for the application's lifetime.
- [x] All types of logging can be turned on and off
- [x] Logging to the console can be turned on and off
- Logging to the console should be displayed in the console app that runs along side of the **_VelaptorTesting_** application.
- [x] Logging to a file can be turned on and off
- Logging Of Warnings
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Yellow in color in the console
- Logging Of Errors
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Red in color in console
- Logging Of Events
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Cyan in color in the console
- Logging Of Info
- [x] Can be turned on and off
- [x] Will not be logged if _**LoggingEnabled**_ is off.
- [x] Will not be logged to the console if _**ConsoleLoggingEnabled**_ is off
- [x] Will not be logged to a file if _**FileLoggingEnabled**_ is off
- [x] Gray or white in color in console
- OpenGL Logging
- [x] Change the `GLInvoker.DebugCallback()` implementation to log warnings and errors.
- [x] Code documentation added if required
- [x] Unit tests added
- [x] All unit tests pass
### ToDo Items
- [x] Draft pull request created and linked to this issue
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [x] Issue linked to the proper project
- [X] Issue linked to proper milestone
### Issue Dependencies
- #248
### Related Work
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
priority
|
🚧create system for logging errors warnings and events i have done the items below i have updated the title by replacing the section description create a logging system for logging exceptions errors and events that occur throughout the system the system should be able to log the type of logs such as an event exception or warning or info types of output the system should also have different kinds of outputs for logging such as the console log files etc the minimum should be log files in the application directory and the console when how and if things are logged the system should also have logging turned off by default for release versions but turned on by default for debug versions the various kinds of logging can be turned on and off manually by changing adding settings and values to the settings file logging settings loggingenabled true false this will enable or disable all logging with the logging system once implemented consoleloggingenabled true false will log errors and warnings to the console if true and if loggingenabled true fileloggingenabled true false will log errors and warnings to a log file if true and if loggingenabled true logwarningsenabled true false will log warnings if true and if loggingenabled true will log to console and or file if consoleloggingenabled and or fileloggingenabled are enabled logerrorsenabled true false will log errors if true and if loggingenabled true will log to console and or file if consoleloggingenabled and or fileloggingenabled are enabled logeventsenabled true false will log events if true and if loggingenabled true will log to console and or file if consoleloggingenabled and or fileloggingenabled are enabled events can be logged via direct logging service call or by a custom attribute on a method loginfoenabled true false will log info if true and if loggingenabled true will log to console and or file if consoleloggingenabled and or fileloggingenabled are enabled acceptance criteria this issue is finished when logging 
service created the app settings service will be injected into the logging service checking for the app setting will be done internally with the logging methods of the logger class this way logic for checking log settings does not have to be written throughout the application the logging service will have read only properties that hold the logging settings the logging service will be a singleton this means the logging settings will be loaded once and saved for the application s lifetime all types of logging can be turned on and off logging to the console can be turned on and off logging to the console should be displayed in the console app that runs along side of the velaptortesting application logging to a file can be turned on and off logging of warnings will not be logged if loggingenabled is off will not be logged to the console if consoleloggingenabled is off will not be logged to a file if fileloggingenabled is off yellow in color in the console logging of errors will not be logged if loggingenabled is off will not be logged to the console if consoleloggingenabled is off will not be logged to a file if fileloggingenabled is off red in color in console logging of events will not be logged if loggingenabled is off will not be logged to the console if consoleloggingenabled is off will not be logged to a file if fileloggingenabled is off cyan in color in the console logging of info can be turned on and off will not be logged if loggingenabled is off will not be logged to the console if consoleloggingenabled is off will not be logged to a file if fileloggingenabled is off gray or white in color in console opengl logging change the glinvoker debugcallback implementation to log warnings and errors code documentation added if required unit tests added all unit tests pass todo items draft pull request created and linked to this issue priority label added to issue low priority medium priority or high priority issue linked to the proper project issue linked to proper 
milestone issue dependencies related work no response code of conduct i agree to follow this project s code of conduct
| 1
|
217,732
| 7,327,804,037
|
IssuesEvent
|
2018-03-04 14:30:44
|
goby-lang/goby
|
https://api.github.com/repos/goby-lang/goby
|
closed
|
Add integration tests for simple server
|
Priority High VM in progress
|
Since simple server is one of our big features, it should be covered with integration tests to prevent regression
|
1.0
|
Add integration tests for simple server - Since simple server is one of our big features, it should be covered with integration tests to prevent regression
|
priority
|
add integration tests for simple server since simple server is one of our big features it should be covered with integration tests to prevent regression
| 1
|
825,704
| 31,467,639,403
|
IssuesEvent
|
2023-08-30 04:16:39
|
chimple/cuba
|
https://api.github.com/repos/chimple/cuba
|
closed
|
Error in phone number login in few device.
|
bug High Priority
|
- On some devices, attempting phone number login produces reCAPTCHA errors.



|
1.0
|
Error in phone number login in few device. - - Here in some devices when we try phone number login we will get recaptcha errors.



|
priority
|
error in phone number login in few device here in some devices when we try phone number login we will get recaptcha errors
| 1
|
687,947
| 23,543,479,014
|
IssuesEvent
|
2022-08-20 19:17:13
|
amplication/amplication
|
https://api.github.com/repos/amplication/amplication
|
closed
|
🐛 Bug Report: Commit page crashing
|
type: bug priority: high @amplication/client
|
### What happened?
When you enter the commit page of a resource, the page crashes with
<img width="1512" alt="Screen Shot 2022-08-18 at 12 08 05" src="https://user-images.githubusercontent.com/61761153/185379295-f0301baa-ec36-4c16-91cc-17d1330437aa.png">
### What you expected to happen
It needs to show the commits of the resource
### How to reproduce
In v 0.15.0
Generate a resource and enter the commits page of the resource; it will crash
### Amplication version
0.15.0
### Environment
_No response_
### Are you willing to submit PR?
Yes I am willing to submit a PR!
|
1.0
|
🐛 Bug Report: Commit page crashing - ### What happened?
When you enter the commit page of a resource, the page crashes with
<img width="1512" alt="Screen Shot 2022-08-18 at 12 08 05" src="https://user-images.githubusercontent.com/61761153/185379295-f0301baa-ec36-4c16-91cc-17d1330437aa.png">
### What you expected to happen
It needs to show the commits of the resource
### How to reproduce
In v 0.15.0
Generate a resource and enter the commits page of the resource; it will crash
### Amplication version
0.15.0
### Environment
_No response_
### Are you willing to submit PR?
Yes I am willing to submit a PR!
|
priority
|
🐛 bug report commit page crashing what happened when you enter the commit page of a resource the page crashes with img width alt screen shot at src what you expected to happen its needs to show the commit of the resource how to reproduce in v generate an resource and enter the commits page of the resource it will crash amplication version environment no response are you willing to submit pr yes i am willing to submit a pr
| 1
|
45,851
| 2,941,441,494
|
IssuesEvent
|
2015-07-02 07:59:23
|
gama-platform/gama
|
https://api.github.com/repos/gama-platform/gama
|
closed
|
Error : ArrayIndexOutOfBoundsException after have modify one line in a csv file
|
> Bug Affects Datafiles Concerns Persistence OS All Priority High Version Git
|
Models : SampleData from genstar
gaml : sample_data_conversion
- Execute the experiment; it's OK.
- Check the number of people in the file resultingDistribution1.csv -> e.g. "60 agriculteur"
- Add a new line in the csv file includes/PICURS_People_SampleData.csv, e.g. "51:60 true agriculteur"
- Try to execute again; now -> error : ArrayIndexOutOfBoundsException
- Delete the project without deleting it on disk, i.e. leave "Delete project contents on disk" unchecked
- Re-import the same project.
- Execute the experiment -> it's OK
- Check the file resultingDistribution1.csv -> we now see "61 agriculteur"
Gama 1.7, version Git. Windows 8.1. JDK 1.8
|
1.0
|
Error : ArrayIndexOutOfBoundsException after have modify one line in a csv file - Models : SampleData from genstar
gaml : sample_data_conversion
- Execute the experiment; it's OK.
- Check the number of people in the file resultingDistribution1.csv -> e.g. "60 agriculteur"
- Add a new line in the csv file includes/PICURS_People_SampleData.csv, e.g. "51:60 true agriculteur"
- Try to execute again; now -> error : ArrayIndexOutOfBoundsException
- Delete the project without deleting it on disk, i.e. leave "Delete project contents on disk" unchecked
- Re-import the same project.
- Execute the experiment -> it's OK
- Check the file resultingDistribution1.csv -> we now see "61 agriculteur"
Gama 1.7, version Git. Windows 8.1. JDK 1.8
|
priority
|
error arrayindexoutofboundsexception after have modify one line in a csv file models sampledata from genstar gaml sample data conversion execute expirement it s ok check number of people in file csv e g agriculteur add a new line in csv file includes picurs people sampledata csv e g true agriculteur try to execute again now error arrayindexoutofboundsexception delete the project without it delete in the disk i e no check delete project contents on disk re import the same project execute experiment it s ok check in file csv we see now agriculteur gama version git windows jdk
| 1
|
74,969
| 3,453,654,097
|
IssuesEvent
|
2015-12-17 12:27:20
|
CoderDojo/community-platform
|
https://api.github.com/repos/CoderDojo/community-platform
|
opened
|
Formatting on badge and events data
|
badges bug high priority profiles/users
|
For some reason, we seem to have some strange json formatting in the database in a couple of places.
Here's an example from the dates field on the `cd_events` table: `{"{\"startTime\":\"2015-09-23T18:00:00.083Z\",\"endTime\":\"2015-09-23T19:30:00.083Z\"}"}`
As opposed to this value in the city field: `{"toponymName":"Dublin"}`
The other field that needs to be cleaned up is the `badges` field on the `cd_profiles` table
```
badges | {"{\"id\":43,\"slug\":\"mentor-badge\",\"name\":\"Mentor Badge\",\"strapline\":\"This Badge is awarded to CoderDojo community members who share their coding knowledge with others by Mentoring at CoderDojo.\",\"earnerDescription\":\"This badge signifies that you want to make the world a better place by sharing your knowledge with others by Mentoring at a CoderDojo in your local community.\\r\\n\\r\\nTo apply: You are eligible for this badge once you've Mentored at two CoderDojo sessions or approximately 4 hours of Mentoring time at your Dojo. You can request this badge from your CoderDojo champion.\",\"consumerDescription\":\"The owner of this badge has chosen to share their coding knowledge with others by Mentoring at their CoderDojo. This is no easy task and they have excelled! By sharing their coding knowledge they are showing others that coding is a force to change the world!\",\"issuerUrl\":\"https://coderdojo.com/\",\"rubricUrl\":\"\",\"timeValue\":4,\"timeUnits\":\"hours\",\"limit\":0,\"unique\":0,\"created\":\"2015-12-01T14:44:01.000Z\",\"imageUrl\":\"http://badgekit.coderdojo.com:80/images/badge/50\",\"type\":\"Skill\",\"archived\":false,\"system\":{\"id\":1,\"slug\":\"coderdojo\",\"url\":\"http://52.17.20.218\",\"name\":\"CoderDojo\",\"email\":null,\"imageUrl\":null,\"issuers\":[]},\"criteriaUrl\":\"http://badgekit.coderdojo.com:80/system/coderdojo/badge/mentor-badge/criteria\",\"criteria\":[{\"id\":37,\"description\":\"The awardee of this badge has to have Mentored at two CoderDojo sessions or approximately 4 hours of Mentoring time.\",\"required\":1,\"note\":\"\"}],\"alignments\":[],\"evidenceType\":null,\"categories\":[\"Community Action\",\"Coding and 
Gaming\"],\"tags\":[{\"id\":145,\"value\":\"soft-skills\"},{\"id\":146,\"value\":\"Mentor\"}],\"milestones\":[],\"$$hashKey\":\"object:80\",\"status\":\"accepted\",\"dateAccepted\":\"2015-12-08T16:11:37.035Z\",\"assertion\":{\"uid\":\"coderdojo-9c9ed75a-660f-4a2e-a92e-f267c161c34d43\",\"recipient\":{\"identity\":\"pete@coderdojo.org\",\"type\":\"email\",\"hashed\":false},\"badge\":\"http://badgekit.coderdojo.com:8080/public/systems/coderdojo/badges/mentor-badge\",\"verify\":{\"url\":\"https://zen.coderdojo.com/api/1.0/verify_badge/9c9ed75a-660f-4a2e-a92e-f267c161c34d/43/assertion\",\"type\":\"hosted\"},\"issuedOn\":\"2015-12-08T16:11:37.035Z\"}}"}
```
The way these fields are formatted is restricting us in the way we do statistics so we need to sort this out ASAP!
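The wrapped values above look like JSON that was serialized twice and then stored inside a Postgres array-style literal; under that assumption, a cleanup pass could recover the real object with two decode steps. This is a hypothetical sketch of the unwrapping, not the platform's actual migration code.

```python
import json

def unwrap_double_encoded(raw: str) -> dict:
    # Values like {"{\"k\":\"v\"}"} are an outer {...} wrapper around a
    # JSON-quoted string. Drop the outer braces, decode the quoted string,
    # then parse the inner text as the real JSON object.
    quoted = raw[1:-1]          # leaves a JSON-quoted string
    inner = json.loads(quoted)  # first decode: the quoted string
    return json.loads(inner)    # second decode: the actual object

raw = '{"{\\"startTime\\":\\"2015-09-23T18:00:00.083Z\\",\\"endTime\\":\\"2015-09-23T19:30:00.083Z\\"}"}'
print(unwrap_double_encoded(raw)["startTime"])  # → 2015-09-23T18:00:00.083Z
```

After unwrapping, the value matches the shape of the well-formed city field (`{"toponymName":"Dublin"}`), which only needs a single `json.loads`.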
|
1.0
|
Formatting on badge and events data - For some reason, we seem to have some strange json formatting in the database in a couple of places.
Here's an example from the dates field on the `cd_events` table: `{"{\"startTime\":\"2015-09-23T18:00:00.083Z\",\"endTime\":\"2015-09-23T19:30:00.083Z\"}"}`
As opposed to this value in the city field: `{"toponymName":"Dublin"}`
The other field that needs to be cleaned up is the `badges` field on the `cd_profiles` table
```
badges | {"{\"id\":43,\"slug\":\"mentor-badge\",\"name\":\"Mentor Badge\",\"strapline\":\"This Badge is awarded to CoderDojo community members who share their coding knowledge with others by Mentoring at CoderDojo.\",\"earnerDescription\":\"This badge signifies that you want to make the world a better place by sharing your knowledge with others by Mentoring at a CoderDojo in your local community.\\r\\n\\r\\nTo apply: You are eligible for this badge once you've Mentored at two CoderDojo sessions or approximately 4 hours of Mentoring time at your Dojo. You can request this badge from your CoderDojo champion.\",\"consumerDescription\":\"The owner of this badge has chosen to share their coding knowledge with others by Mentoring at their CoderDojo. This is no easy task and they have excelled! By sharing their coding knowledge they are showing others that coding is a force to change the world!\",\"issuerUrl\":\"https://coderdojo.com/\",\"rubricUrl\":\"\",\"timeValue\":4,\"timeUnits\":\"hours\",\"limit\":0,\"unique\":0,\"created\":\"2015-12-01T14:44:01.000Z\",\"imageUrl\":\"http://badgekit.coderdojo.com:80/images/badge/50\",\"type\":\"Skill\",\"archived\":false,\"system\":{\"id\":1,\"slug\":\"coderdojo\",\"url\":\"http://52.17.20.218\",\"name\":\"CoderDojo\",\"email\":null,\"imageUrl\":null,\"issuers\":[]},\"criteriaUrl\":\"http://badgekit.coderdojo.com:80/system/coderdojo/badge/mentor-badge/criteria\",\"criteria\":[{\"id\":37,\"description\":\"The awardee of this badge has to have Mentored at two CoderDojo sessions or approximately 4 hours of Mentoring time.\",\"required\":1,\"note\":\"\"}],\"alignments\":[],\"evidenceType\":null,\"categories\":[\"Community Action\",\"Coding and 
Gaming\"],\"tags\":[{\"id\":145,\"value\":\"soft-skills\"},{\"id\":146,\"value\":\"Mentor\"}],\"milestones\":[],\"$$hashKey\":\"object:80\",\"status\":\"accepted\",\"dateAccepted\":\"2015-12-08T16:11:37.035Z\",\"assertion\":{\"uid\":\"coderdojo-9c9ed75a-660f-4a2e-a92e-f267c161c34d43\",\"recipient\":{\"identity\":\"pete@coderdojo.org\",\"type\":\"email\",\"hashed\":false},\"badge\":\"http://badgekit.coderdojo.com:8080/public/systems/coderdojo/badges/mentor-badge\",\"verify\":{\"url\":\"https://zen.coderdojo.com/api/1.0/verify_badge/9c9ed75a-660f-4a2e-a92e-f267c161c34d/43/assertion\",\"type\":\"hosted\"},\"issuedOn\":\"2015-12-08T16:11:37.035Z\"}}"}
```
The way these fields are formatted is restricting us in the way we do statistics so we need to sort this out ASAP!
|
priority
|
formatting on badge and events data for some reason we seem to have some strange json formatting in the database in a couple of places here s an example from the dates field on the cd events table starttime endtime as opposed to this value in the city field toponymname dublin the other field that needs to be cleaned up is the badges field on the cd profiles table badges id slug mentor badge name mentor badge strapline this badge is awarded to coderdojo community members who share their coding knowledge with others by mentoring at coderdojo earnerdescription this badge signifies that you want to make the world a better place by sharing your knowledge with others by mentoring at a coderdojo in your local community r n r nto apply you are eligible for this badge once you ve mentored at two coderdojo sessions or approximately hours of mentoring time at your dojo you can request this badge from your coderdojo champion consumerdescription the owner of this badge has chosen to share their coding knowledge with others by mentoring at their coderdojo this is no easy task and they have excelled by sharing their coding knowledge they are showing others that coding is a force to change the world issuerurl criteriaurl alignments evidencetype null categories tags milestones hashkey object status accepted dateaccepted assertion uid coderdojo recipient identity pete coderdojo org type email hashed false badge the way these fields are formatted is restricting us in the way we do statistics so we need to sort this out asap
| 1
|
685,213
| 23,447,969,363
|
IssuesEvent
|
2022-08-15 21:50:23
|
pokt-network/pocket
|
https://api.github.com/repos/pokt-network/pocket
|
closed
|
[Persistence] V1 Persistence Foundation
|
core integration persistence priority:high
|
# Objective
A basic SQL-based implementation of the Persistence module specification to enable the development of the rest of the Pocket Node.
# Origin Document
[Pocket protocol persistence specification](https://github.com/pokt-network/pocket-network-protocol/tree/main/persistence)
## Goals / Deliverables
- [x] Implementation of the persistence module
- [x] Actor schema specification
- [x] Actor query specification
- [x] Dockerized infrastructure to run a `LocalNet` with the new persistence module implementation
- [ ] Deletion / deprecation of the PrePersistence module
- [ ] Loading of some sort of state from local disc when a LocalNet node starts up
- [ ] Unit tests, along with the 1st iteration of an accompanying unit test library
- [ ] Documentation
- [ ] Module specific README
- [ ] Module specific CHANGELOG
- [ ] Module specific code architecture (text and/or diagram)
- [ ] Instructions on how to run/test/debug the module implementation
- [ ] Global documentation / references updated
## Non-goals
- A comprehensive and complete "block store" mechanism
- Deployment of the node to a non-local environment
- Data integrity verification and guarantees
- Implementation/adoption of a Merkle Tree
- Implementation/adoption of a Key-Value Store
## Testing Methodology
LocalNet and Unit Tests. See [./docs/development/README.md](docs/development/README.md) for more details.
Owners: @andrewnguyen22 @Olshansk
|
1.0
|
[Persistence] V1 Persistence Foundation - # Objective
A basic SQL-based implementation of the Persistence module specification to enable the development of the rest of the Pocket Node.
# Origin Document
[Pocket protocol persistence specification](https://github.com/pokt-network/pocket-network-protocol/tree/main/persistence)
## Goals / Deliverables
- [x] Implementation of the persistence module
- [x] Actor schema specification
- [x] Actor query specification
- [x] Dockerized infrastructure to run a `LocalNet` with the new persistence module implementation
- [ ] Deletion / deprecation of the PrePersistence module
- [ ] Loading of some sort of state from local disc when a LocalNet node starts up
- [ ] Unit Tests & with the 1st iteration of an accompanying unit test library
- [ ] Documentation
- [ ] Module specific README
- [ ] Module specific CHANGELOG
- [ ] Module specific code architecture (text and/or diagram)
- [ ] Instructions on how to run/test/debug the module implementation
- [ ] Global documentation / references updated
## Non-goals
- A comprehensive and complete "block store" mechanism
- Deployment of the node to a non-local environment
- Data integrity verification and guarantees
- Implementation/adoption of a Merkle Tree
- Implementation/adoption of a Key-Value Store
## Testing Methodology
LocalNet and Unit Tests. See [./docs/development/README.md](docs/development/README.md) for more details.
Owners: @andrewnguyen22 @Olshansk
|
priority
|
persistence foundation objective a basic sql based implementation of the persistence module specification to enable the development of the rest of the pocket node origin document goals deliverables implementation of the persistence module actor schema specification actor query specification dockerized infrastructure to run a localnet with the new persistence module implementation deletion deprecation of the prepersistence module loading of some sort of state from local disc when a localnet node starts up unit tests with the iteration of an accompanying unit test library documentation module specific readme module specific changelog module specific code architecture text and or diagram instructions on how to run test debug the module implementation global documentation references updated non goals a comprehensive and complete block store mechanism deployment of the node to a non local environment data integrity verification and guarantees implementation adoption of a merkle tree implementation adoption of a key value store testing methodology localnet and unit tests see docs development readme md for more details owners olshansk
| 1
|
403,102
| 11,835,833,416
|
IssuesEvent
|
2020-03-23 11:21:26
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Check code action behaviour when there are module imports with alias
|
Area/Tooling Component/LanguageServer Points/0.5 Priority/High Type/Improvement
|
**Description:**
Need to check the code action behaviour when there are module imports with aliases.
**Affected Versions:**
v1.1.0 at least
|
1.0
|
Check code action behaviour when there are module imports with alias - **Description:**
Need to check the code action behaviour when there are module imports with aliases.
**Affected Versions:**
v1.1.0 at least
|
priority
|
check code action behaviour when there are module imports with alias description need to check the code action behaviour when there are module imports with aliases affected versions at least
| 1
|
32,609
| 2,756,447,385
|
IssuesEvent
|
2015-04-27 08:29:44
|
UnifiedViews/Core
|
https://api.github.com/repos/UnifiedViews/Core
|
closed
|
New Feature: FilesDataUnit debugging
|
priority: High severity: enhancement
|
This should include (and feel free to contribute):
- list and filter files
- download files (single, multiple, selected as zip, input list of files to download as zip)
- view as text
|
1.0
|
New Feature: FilesDataUnit debugging - This should include (and feel free to contribute):
- list and filter files
- download files (single, multiple, selected as zip, input list of files to download as zip)
- view as text
|
priority
|
new feature filesdataunit debugging this should include and feel free to contribute list and filter files download files single multiple selected as zip input list of files to download as zip view as text
| 1
|
821,058
| 30,802,455,924
|
IssuesEvent
|
2023-08-01 03:19:25
|
nimblehq/flutter-ic-khanh-thieu
|
https://api.github.com/repos/nimblehq/flutter-ic-khanh-thieu
|
closed
|
[Chore] Init the project
|
priority : high type : chore @0.1.0
|
## Why
- Init project by generating from [the Flutter template](https://github.com/nimblehq/flutter-templates).
## Acceptance Criteria
Generate a new project using the Flutter template with the following parameters:
- The package name is `co.nimblehq.khanhthieu.survey`
- The app's name is `Survey Flutter`
|
1.0
|
[Chore] Init the project - ## Why
- Init project by generating from [the Flutter template](https://github.com/nimblehq/flutter-templates).
## Acceptance Criteria
Generate a new project using the Flutter template with the following parameters:
- The package name is `co.nimblehq.khanhthieu.survey`
- The app's name is `Survey Flutter`
|
priority
|
init the project why init project by generating from acceptance criteria generate a new project using the flutter template with the following parameters the package name is co nimblehq khanhthieu survey the app s name is survey flutter
| 1
|
687,215
| 23,517,277,602
|
IssuesEvent
|
2022-08-18 23:16:14
|
trendscenter/coinstac
|
https://api.github.com/repos/trendscenter/coinstac
|
closed
|
UI: crash on view pipeline for vaults
|
bug high priority
|
## Task Description
Clicking `view pipeline` from a results widget on the home page generates this error
```
TypeError: Cannot read properties of undefined (reading 'computationWhitelist')
in a
in Query
in Apollo(a)
in Query
in Apollo(Apollo(a))
in Mutation
in Apollo(Apollo(Apollo(a)))
in Mutation
in Apollo(Apollo(Apollo(Apollo(a))))
in Query
in Apollo(Apollo(Apollo(Apollo(Apollo(a)))))
in ApolloConsumer
in withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a))))))
in DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a)))))))
in DragDropContext(DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a))))))))
in Connect(DragDropContext(DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a)))))))))
in Connect(Connect(DragDropContext(DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a))))))))))
in ForwardRef
in div
in a
in main
in div
in div
in Dashboard
in ApolloProvider
in ApolloProvider
in Unknown
in withRouter(Component)
in Unknown
in div
in a
in App
in Connect(App)
in RouterContext
in Router
in _R
in Provider
```
|
1.0
|
UI: crash on view pipeline for vaults - ## Task Description
Clicking `view pipeline` from a results widget on the home page generates this error
```
TypeError: Cannot read properties of undefined (reading 'computationWhitelist')
in a
in Query
in Apollo(a)
in Query
in Apollo(Apollo(a))
in Mutation
in Apollo(Apollo(Apollo(a)))
in Mutation
in Apollo(Apollo(Apollo(Apollo(a))))
in Query
in Apollo(Apollo(Apollo(Apollo(Apollo(a)))))
in ApolloConsumer
in withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a))))))
in DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a)))))))
in DragDropContext(DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a))))))))
in Connect(DragDropContext(DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a)))))))))
in Connect(Connect(DragDropContext(DropTarget(withApollo(Apollo(Apollo(Apollo(Apollo(Apollo(a))))))))))
in ForwardRef
in div
in a
in main
in div
in div
in Dashboard
in ApolloProvider
in ApolloProvider
in Unknown
in withRouter(Component)
in Unknown
in div
in a
in App
in Connect(App)
in RouterContext
in Router
in _R
in Provider
```
|
priority
|
ui crash on view pipeline for vaults task description clicking view pipeline from a results widget on the home page generates this error typeerror cannot read properties of undefined reading computationwhitelist in a in query in apollo a in query in apollo apollo a in mutation in apollo apollo apollo a in mutation in apollo apollo apollo apollo a in query in apollo apollo apollo apollo apollo a in apolloconsumer in withapollo apollo apollo apollo apollo apollo a in droptarget withapollo apollo apollo apollo apollo apollo a in dragdropcontext droptarget withapollo apollo apollo apollo apollo apollo a in connect dragdropcontext droptarget withapollo apollo apollo apollo apollo apollo a in connect connect dragdropcontext droptarget withapollo apollo apollo apollo apollo apollo a in forwardref in div in a in main in div in div in dashboard in apolloprovider in apolloprovider in unknown in withrouter component in unknown in div in a in app in connect app in routercontext in router in r in provider
| 1
|
145,162
| 5,559,760,831
|
IssuesEvent
|
2017-03-24 17:42:59
|
vmware/vic
|
https://api.github.com/repos/vmware/vic
|
closed
|
Portlets are broken (related to #4134)
|
area/ui priority/high
|
PR #4134 changed the key for the pretty name of a Docker container from `guestinfo.vice./common/name` to `common/name` and `guestinfo.vice./init/common/name` to `init/common/name`, which breaks the VCH and Container portlets.
**Expected behavior:**
Portlets should show up with correct information
**Actual behavior:**
Portlets are broken
|
1.0
|
Portlets are broken (related to #4134) - PR #4134 changed the key for the pretty name of a Docker container from `guestinfo.vice./common/name` to `common/name` and `guestinfo.vice./init/common/name` to `init/common/name`, which breaks the VCH and Container portlets.
**Expected behavior:**
Portlets should show up with correct information
**Actual behavior:**
Portlets are broken
|
priority
|
portlets are broken related to pr changed the key for the pretty name of a docker container from guestinfo vice common name to common name and guestinfo vice init common name to init common name which break the vch and container portlets expected behavior portlets should show up with correct information actual behavior portlets are broken
| 1
|
319,281
| 9,741,488,488
|
IssuesEvent
|
2019-06-02 09:02:38
|
communal-cloud/communal-cloud
|
https://api.github.com/repos/communal-cloud/communal-cloud
|
closed
|
Task data should be bundled together
|
api high priority
|
When we execute the task we should be able to get the necessary data bundled together
|
1.0
|
Task data should be bundled together - When we execute the task we should be able to get the necessary data bundled together
|
priority
|
task data should be bundled together when we execute the task we should be able to get the necessary data bundled together
| 1
|
352,614
| 10,544,152,525
|
IssuesEvent
|
2019-10-02 16:19:00
|
AY1920S1-CS2103T-T11-4/main
|
https://api.github.com/repos/AY1920S1-CS2103T-T11-4/main
|
closed
|
Update About Us Page
|
priority.High type.Task
|
From TP week 7 :
About Us page: This page is used for module admin purposes. Please follow the format closely or else our scripts will not be able to give credit for your work.
Replace info of SE-EDU developers with info of your team. Include a suitable photo as described here.
- Including the name/photo of the supervisor/lecturer is optional.
- The filename of the profile photo (even a placeholder image) should be docs/images/github_username_in_lower_case.png e.g. docs/images/damithc.png. If your photo is in jpg format, name the file as .png anyway.
- Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary.
|
1.0
|
Update About Us Page - From TP week 7:
About Us page: This page is used for module admin purposes. Please follow the format closely or else our scripts will not be able to give credit for your work.
Replace info of SE-EDU developers with info of your team. Include a suitable photo as described here.
- Including the name/photo of the supervisor/lecturer is optional.
- The filename of the profile photo (even a placeholder image) should be docs/images/github_username_in_lower_case.png e.g. docs/images/damithc.png. If your photo is in jpg format, name the file as .png anyway.
- Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary.
|
priority
|
update about us page from tp week about us page this page is used for module admin purposes please follow the format closely or else our scripts will not be able to give credit for your work replace info of se edu developers with info of your team include a suitable photo as described here including the name photo of the supervisor lecturer is optional the filename of the profile photo even a placeholder image should be doc images githbub username in lower case png e g docs images damithc png if you photo is in jpg format name the file as png anyway indicate the different roles played and responsibilities held by each team member you can reassign these roles and responsibilities as explained in admin project scope later in the project if necessary
| 1
|
666,604
| 22,361,420,756
|
IssuesEvent
|
2022-06-15 20:56:27
|
FIREFIGHT-RELOADED/FIREFIGHT-RELOADED-src-sdk-2013
|
https://api.github.com/repos/FIREFIGHT-RELOADED/FIREFIGHT-RELOADED-src-sdk-2013
|
reopened
|
[LINUX] Various language file entries are not loading (perhaps due to byte size.)
|
bug high-priority linux slightly-fixed
|
Current issues:
- Death Notice Enemy Names are missing (in console and on the Death Notice itself)
- Wrong/large death notice icon (it should be the skull icon as in the Windows version; not the main focus, but still worth mentioning.)
- Missing shop item title text
- Missing shop button text
- Reward item name showing up as (null)
I feel the reason this is happening is that the byte size for all of these char values is too low or too high for Linux. Will need to experiment with ways to fix this.




|
1.0
|
[LINUX] Various language file entries are not loading (perhaps due to byte size.) - Current issues:
- Death Notice Enemy Names are missing (in console and on the Death Notice itself)
- Wrong/large death notice icon (it should be the skull icon as in the Windows version; not the main focus, but still worth mentioning.)
- Missing shop item title text
- Missing shop button text
- Reward item name showing up as (null)
I feel the reason this is happening is that the byte size for all of these char values is too low or too high for Linux. Will need to experiment with ways to fix this.




|
priority
|
various language file entries are not loading perhaps due to byte size current issues death notice enemy names are missing in console and on the death notice itself wrong large death notice icon should be the skull icon like the windows version not the main focus but still a good mention missing shop item title text missing shop button text reward item name showing up as null i feel the reason why this isn t happening is that the byte size for all of these char values is too low or high for linux will need to experiment with ways to fix this
| 1
|
757,478
| 26,514,191,064
|
IssuesEvent
|
2023-01-18 19:27:29
|
andrefdre/Dora_the_mug_finder_SAVI
|
https://api.github.com/repos/andrefdre/Dora_the_mug_finder_SAVI
|
closed
|
Create a ROS message to send the classifications back to visualize them
|
enhancement high_priority
|
Implement ROS messages to visualize item labels in the point cloud
|
1.0
|
Create a ROS message to send the classifications back to visualize them - Implement ROS messages to visualize item labels in the point cloud
|
priority
|
create a ros message to send the classifications back to visualize them implement ros messages to visualize item labels in the point cloud
| 1
|
499,400
| 14,446,560,659
|
IssuesEvent
|
2020-12-08 01:34:34
|
citi-onboarding/acervusej-onepage
|
https://api.github.com/repos/citi-onboarding/acervusej-onepage
|
opened
|
Create email sending logic on the front-end
|
high priority
|
Create email sending logic on the front-end
- [ ] Store the inputs of the contact section and store them in a state
- [ ] Make a function to send this data in the necessary fields of an email message, based on the backend
- [ ] The function will be called when the **send button** is clicked, and the fields will be cleared when the message is sent
**Links that can help**
- [Ascender Contact](https://github.com/citi-onboarding/Ascender-Jr/blob/develop/client/src/components/Contact/Contact.jsx)
- [Article](https://rangle.io/blog/simplifying-controlled-inputs-with-hooks/)
|
1.0
|
Create email sending logic on the front-end - Create email sending logic on the front-end
- [ ] Store the inputs of the contact section and store them in a state
- [ ] Make a function to send this data in the necessary fields of an email message, based on the backend
- [ ] The function will be called when the **send button** is clicked, and the fields will be cleared when the message is sent
**Links that can help**
- [Ascender Contact](https://github.com/citi-onboarding/Ascender-Jr/blob/develop/client/src/components/Contact/Contact.jsx)
- [Article](https://rangle.io/blog/simplifying-controlled-inputs-with-hooks/)
|
priority
|
create email sending logic on the front end create email sending logic on the front end store the inputs of the contact section and store them in a state make a function to send this data in the necessary fields of an email message based on the backend the function will be called when the send button is clicked and the fields will be cleared when the message is sent links that can help
| 1
|
452,436
| 13,050,348,880
|
IssuesEvent
|
2020-07-29 15:21:50
|
newrelic/newrelic-cli
|
https://api.github.com/repos/newrelic/newrelic-cli
|
closed
|
Finish publishing to Chocolatey
|
enhancement priority:high size:S
|
### Objective
We had some outstanding changes that needed to be made to get published on Chocolatey.
### Acceptance Criteria
- New Relic CLI is available via Chocolatey
|
1.0
|
Finish publishing to Chocolatey - ### Objective
We had some outstanding changes that needed to be made to get published on Chocolatey.
### Acceptance Criteria
- New Relic CLI is available via Chocolatey
|
priority
|
finish publishing to chocolatey objective we had some outstanding changes needed to be made to get published on chocolatey acceptance criteria new relic cli is available via chocolatey
| 1
|
746,197
| 26,019,772,318
|
IssuesEvent
|
2022-12-21 11:35:10
|
younginnovations/iatipublisher
|
https://api.github.com/repos/younginnovations/iatipublisher
|
closed
|
Issue found in Activity
|
type: bug priority: high
|
- [ ] Issue 1: Result Complete status is not working.
https://user-images.githubusercontent.com/78422663/208391299-eb04b8c2-2e7f-4707-a0ce-ff6d2c7305e1.mp4
- [x] Issue 2: In transactions, the humanitarian value updates automatically to true.
https://user-images.githubusercontent.com/78422663/208392864-71717a2e-eece-4454-a7bd-3e5ec5b9bd44.mp4
- [x] Issue 3: Period is missing on the Result detail page.

- [x] Issue 4: These buttons are not working properly.
https://user-images.githubusercontent.com/78422663/208395779-8487aca0-2dae-4d96-934f-a952ed838afa.mp4
- [x] Issue 5: Error bar is not dynamic.
https://user-images.githubusercontent.com/78422663/208402810-6b9045ac-69c2-4367-94e0-d023b4f2e302.mp4
- [x] Issue 6: Remove vocabulary repetition in country-budget-items

|
1.0
|
Issue found in Activity - - [ ] Issue 1: Result Complete status is not working.
https://user-images.githubusercontent.com/78422663/208391299-eb04b8c2-2e7f-4707-a0ce-ff6d2c7305e1.mp4
- [x] Issue 2: In transactions, the humanitarian value updates automatically to true.
https://user-images.githubusercontent.com/78422663/208392864-71717a2e-eece-4454-a7bd-3e5ec5b9bd44.mp4
- [x] Issue 3: Period is missing on the Result detail page.

- [x] Issue 4: These buttons are not working properly.
https://user-images.githubusercontent.com/78422663/208395779-8487aca0-2dae-4d96-934f-a952ed838afa.mp4
- [x] Issue 5: Error bar is not dynamic.
https://user-images.githubusercontent.com/78422663/208402810-6b9045ac-69c2-4367-94e0-d023b4f2e302.mp4
- [x] Issue 6: Remove vocabulary repetition in country-budget-items

|
priority
|
issue found in activity issue result complete status is not working issue in transaction value of humanitarian update automatically to true issue period is missing on the result detail page issue these buttons are not working properly issue error bar is not dynamic issue remove vocabulary repetition in country budget items
| 1
|
252,538
| 8,037,440,275
|
IssuesEvent
|
2018-07-30 12:39:59
|
smartdevicelink/sdl_core
|
https://api.github.com/repos/smartdevicelink/sdl_core
|
opened
|
HMILevel is not resumed to LIMITED for non-media applications
|
Bug Contributor priority 1: High
|
### Bug Report
HMILevel is not resumed to LIMITED for non-media applications
#### Preconditions:
1. Values configured in .ini file:
AppSavePersistentDataTimeout =10000;
ResumptionDelayBeforeIgn = 30;
ResumptionDelayAfterIgn = 30;
ApplicationResumingTimeout = 5000
2. Core and HMI are started.
3. Non-media application (COMMUNICATION) is registered and activated. -> HMI level = FULL
4. Go to menu Apps. HMI level of application becomes LIMITED
5. Stop WiFi connection.
##### Reproduction Steps
1. Press "Go To CD" -> HMI sends OnEventChanged(AUDIO_SOURCE, isActive: true) notification to SDL.
2. Activate WiFi connection.
3. Wait ApplicationResumingTimeout to expire.
##### Expected Behavior
SDL must resume HMILevel for non-media app.
##### Observed Behavior
HMI level becomes BACKGROUND, audioStreamingState : NOT_AUDIBLE
##### OS & Version Information
* OS/Version:
* SDL Core Version:
* Testing Against:
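The preconditions above list resumption-related timeout values from Core's .ini file. As a rough illustration only (the section name "MAIN" and file layout are assumptions, not SDL Core's actual format), such settings could be read with Python's `configparser`:

```python
# Sketch: parsing resumption timeout settings like those in the preconditions.
# The section name "MAIN" is an assumption; SDL Core's real .ini may differ.
import configparser

INI_TEXT = """
[MAIN]
AppSavePersistentDataTimeout = 10000
ResumptionDelayBeforeIgn = 30
ResumptionDelayAfterIgn = 30
ApplicationResumingTimeout = 5000
"""

def load_resumption_settings(text: str) -> dict:
    """Return the configured timeouts as integers, keyed by option name."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    section = parser["MAIN"]
    # Note: configparser lowercases option names by default.
    return {key: section.getint(key) for key in section}

settings = load_resumption_settings(INI_TEXT)
```

With these values, the reproduction step "Wait ApplicationResumingTimeout to expire" corresponds to a 5000 ms wait.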
|
1.0
|
HMILevel is not resumed to LIMITED for non-media applications - ### Bug Report
HMILevel is not resumed to LIMITED for non-media applications
#### Preconditions:
1. Values configured in .ini file:
AppSavePersistentDataTimeout =10000;
ResumptionDelayBeforeIgn = 30;
ResumptionDelayAfterIgn = 30;
ApplicationResumingTimeout = 5000
2. Core and HMI are started.
3. Non-media application (COMMUNICATION) is registered and activated. -> HMI level = FULL
4. Go to menu Apps. HMI level of application becomes LIMITED
5. Stop WiFi connection.
##### Reproduction Steps
1. Press "Go To CD" -> HMI sends OnEventChanged(AUDIO_SOURCE, isActive: true) notification to SDL.
2. Activate WiFi connection.
3. Wait ApplicationResumingTimeout to expire.
##### Expected Behavior
SDL must resume HMILevel for non-media app.
##### Observed Behavior
HMI level becomes BACKGROUND, audioStreamingState : NOT_AUDIBLE
##### OS & Version Information
* OS/Version:
* SDL Core Version:
* Testing Against:
|
priority
|
hmilevel is not resumed to limited for non media applications bug report hmilevel is not resumed to limited for non media applications preconditions values configured in ini file appsavepersistentdatatimeout resumptiondelaybeforeign resumptiondelayafterign applicationresumingtimeout core and hmi are started non media application communiation is registered and activated hmi level full go to menu apps hmi level of application becomes limited stop wifi connection reproduction steps press go to cd hmi sends oneventchanged audio source isactive true notification to sdl activate wifi connection wait applicationresumingtimeout to expire expected behavior sdl must resume hmilevel for non media app observed behavior hmi level becomes background audiostreamingstate not audible os version information os version sdl core version testing against
| 1
|
737,467
| 25,517,714,354
|
IssuesEvent
|
2022-11-28 17:39:57
|
rstudio/gt
|
https://api.github.com/repos/rstudio/gt
|
closed
|
`data_color()` strips column attributes
|
Difficulty: [2] Intermediate Effort: [2] Medium Priority: [3] High Type: ★ Enhancement
|
Hello,
I have been using gt to generate tables showing data for a few different metrics. The columns showing these metrics must meet some quite specific requirements:
* The table text should show the metric's value
* The cell colours should be scaled to the metric's _percentile_ value (calculated within a much larger sample than is shown in the table)
I have been working around this by "hiding" the percentiles as metadata in the metric's attributes, and composing a colour scaling function with a percentile-getter to scale the colours appropriately. See the example below:
```r
library(tidyverse)
library(gt)
# Generate some example data where the percentile is calculated on a superset
example_data <-
tibble::tibble(group = sample(letters, 500L, replace = TRUE),
metric_value = rnorm(500L)) %>%
mutate(metric_percentile = percent_rank(metric_value)) %>%
filter(group == "a")
# Hide the percentiles as attributes
example_data_percentiled <-
example_data %>%
transmute(group,
metric_value_with_percentile = structure(
metric_value,
percentiles = metric_percentile,
class = c("vector_with_percentiles", class(metric_value))
))
gt(example_data_percentiled) %>%
data_color(
columns = vars(metric_value_with_percentile),
colors = compose(
scales::col_numeric(
palette = c("white", "green"),
domain = c(0, 1)),
function(x) attr(x, "percentiles"))
)
```
However, with v0.2.2, this no longer works. Specifically, due to the changes introduced by [this commit](https://github.com/rstudio/gt/commit/321bfb93f1910c458cb7e57e9de406b31c50b228):
```
# Before
data_vals <- data_tbl[[column]]
# After
data_vals <- data_tbl[[column]][rows]
```
By selecting the rows in this way, the attributes of any column are stripped out. This makes us unable to execute a ~hack~ workaround such as this one.
I don't think this specific change to gt should necessarily be reversed (this is an exceptionally niche use-case), although it would be nice to be able to map a column's colour to data in other columns.
|
1.0
|
`data_color()` strips column attributes - Hello,
I have been using gt to generate tables showing data for a few different metrics. The columns showing these metrics must meet some quite specific requirements:
* The table text should show the metric's value
* The cell colours should be scaled to the metric's _percentile_ value (calculated within a much larger sample than is shown in the table)
I have been working around this by "hiding" the percentiles as metadata in the metric's attributes, and composing a colour scaling function with a percentile-getter to scale the colours appropriately. See the example below:
```r
library(tidyverse)
library(gt)
# Generate some example data where the percentile is calculated on a superset
example_data <-
tibble::tibble(group = sample(letters, 500L, replace = TRUE),
metric_value = rnorm(500L)) %>%
mutate(metric_percentile = percent_rank(metric_value)) %>%
filter(group == "a")
# Hide the percentiles as attributes
example_data_percentiled <-
example_data %>%
transmute(group,
metric_value_with_percentile = structure(
metric_value,
percentiles = metric_percentile,
class = c("vector_with_percentiles", class(metric_value))
))
gt(example_data_percentiled) %>%
data_color(
columns = vars(metric_value_with_percentile),
colors = compose(
scales::col_numeric(
palette = c("white", "green"),
domain = c(0, 1)),
function(x) attr(x, "percentiles"))
)
```
However, with v0.2.2, this no longer works. Specifically, due to the changes introduced by [this commit](https://github.com/rstudio/gt/commit/321bfb93f1910c458cb7e57e9de406b31c50b228):
```
# Before
data_vals <- data_tbl[[column]]
# After
data_vals <- data_tbl[[column]][rows]
```
By selecting the rows in this way, the attributes of any column are stripped out. This makes us unable to execute a ~hack~ workaround such as this one.
I don't think this specific change to gt should necessarily be reversed (this is an exceptionally niche use-case), although it would be nice to be able to map a column's colour to data in other columns.
|
priority
|
data color strips column attributes hello i have been using gt to generate tables showing data for a few different metrics the columns showing these metrics must meet some quite specific requirements the table text should show the metric s value the cell colours should be scaled to the metric s percentile value calculated within a much larger sample than is shown in the table i have been working around this by hiding the percentiles as metadata in the metric s attributes and composing a colour scaling function with a percentile getter to scale the colours appropriately see the example below r library tidyverse library gt generate some example data where the percentile is calculated on a superset example data tibble tibble group sample letters replace true metric value rnorm mutate metric percentile percent rank metric value filter group a hide the percentiles as attributes example data percentiled example data transmute group metric value with percentile structure metric value percentiles metric percentile class c vector with percentiles class metric value gt example data percentiled data color columns vars metric value with percentile colors compose scales col numeric palette c white green domain c function x attr x percentiles however with this no longer works specifically due to the changes introduced by before data vals data tbl after data vals data tbl by selecting the rows in this way the attributes of any column are stripped out this makes us unable to execute a hack workaround such as this one i don t think this specific change to gt should necessarily be reversed this is an exceptionally niche use case although it would be nice to be able to map a columns colour to data in other columns
| 1
|
330,804
| 10,055,693,552
|
IssuesEvent
|
2019-07-22 07:15:24
|
projectacrn/acrn-hypervisor
|
https://api.github.com/repos/projectacrn/acrn-hypervisor
|
closed
|
Potential Memory Leaks Found(apl_sdc_stable).
|
priority: P2-High type: bug
|
During static analysis of the ACRN hypervisor source code, it was found that allocated memory is not sufficiently tracked and released after use, which leads to memory growth over time and can result in unexpected behavior.
/home/intel/ACRN/acrn-hypervisor/devicemodel/core/mem.c:216 | register_mem_int()
/home/intel/ACRN/acrn-hypervisor/devicemodel/hw/pci/xhci.c:4005 | pci_xhci_parse_opts()
|
1.0
|
Potential Memory Leaks Found(apl_sdc_stable). - During static analysis of the ACRN hypervisor source code, it was found that there is no sufficient tracking and release of allocated memory after it has been used, which would result in a memory growth over a period of time which can result in unexpected behavior.
/home/intel/ACRN/acrn-hypervisor/devicemodel/core/mem.c:216 | register_mem_int()
/home/intel/ACRN/acrn-hypervisor/devicemodel/hw/pci/xhci.c:4005 | pci_xhci_parse_opts()
|
priority
|
potential memory leaks found apl sdc stable during static analysis of the acrn hypervisor source code it was found that there is no sufficient tracking and release of allocated memory after it has been used which would result in a memory growth over a period of time which can result in unexpected behavior home intel acrn acrn hypervisor devicemodel core mem c register mem int home intel acrn acrn hypervisor devicemodel hw pci xhci c pci xhci parse opts
| 1
|
269,924
| 8,444,478,899
|
IssuesEvent
|
2018-10-18 18:33:37
|
DiscordDungeons/Bugs
|
https://api.github.com/repos/DiscordDungeons/Bugs
|
closed
|
Connections duplicating
|
Bot Bug High Priority
|
Whilst using `#!location`, the Connections for the area Buckleport appear to be duplicated.

|
1.0
|
Connections duplicating - Whilst using `#!location`, the Connections for the area Buckleport appear to be duplicated.

|
priority
|
connections duplicating whilst using location the connections appear to duplicate area buckleport
| 1
|
327,840
| 9,981,960,550
|
IssuesEvent
|
2019-07-10 08:46:41
|
highcharts/highcharts
|
https://api.github.com/repos/highcharts/highcharts
|
closed
|
[Accessibility] Unable to navigate to legend using keyboard if height not set - chart goes blank
|
Priority: High Type: Bug
|
#### Expected behaviour
Using the accessibility module, users should be able to navigate to the legend using the keyboard, without explicitly setting a chart height.
#### Actual behaviour
Using the accessibility module, users are not able to navigate to the legend using the keyboard, without explicitly setting a chart height. The chart ends up going blank, or looking buggy.
#### Live demo with steps to reproduce
Example chart using existing demo:
https://jsfiddle.net/3fsy67c1/
1. Using the keyboard, attempt to tab to the legend
2. Once you attempt to tab to the legend, the chart goes blank
3. If you uncomment the height property in the CSS and run steps 1 and 2, it all works fine.
#### Product version
Highcharts v7.1.1 (using accessibility module)
#### Affected browser(s)
Chrome on osx
|
1.0
|
[Accessibility] Unable to navigate to legend using keyboard if height not set - chart goes blank - #### Expected behaviour
Using the accessibility module, users should be able to navigate to the legend using the keyboard, without explicitly setting a chart height.
#### Actual behaviour
Using the accessibility module, users are not able to navigate to the legend using the keyboard, without explicitly setting a chart height. The chart ends up going blank, or looking buggy.
#### Live demo with steps to reproduce
Example chart using existing demo:
https://jsfiddle.net/3fsy67c1/
1. Using the keyboard, attempt to tab to the legend
2. Once you attempt to tab to the legend, the chart goes blank
3. If you uncomment the height property in the CSS and run steps 1 and 2, it all works fine.
#### Product version
Highcharts v7.1.1 (using accessibility module)
#### Affected browser(s)
Chrome on osx
|
priority
|
unable to navigate to legend using keyboard if height not set chart goes blank expected behaviour using the accessibility module users should be able to navigate to the legend using the keyboard without explicitly setting a chart height actual behaviour using the accessibility module users are not able to navigate to the legend using the keyboard without explicitly setting a chart height the chart ends up going blank or looking buggy live demo with steps to reproduce example chart using existing demo using the keyboard attempt to tab to the legend once you expect to tab to the legend the chart goes blank if you uncomment the height property in the css and run steps and it all works fine product version highcharts using accessibility module affected browser s chrome on osx
| 1
|
282,141
| 8,704,022,340
|
IssuesEvent
|
2018-12-05 18:13:08
|
AICrowd/AIcrowd
|
https://api.github.com/repos/AICrowd/AIcrowd
|
closed
|
Ability to embed youtube videos in the leaderboard media column
|
feature high priority
|
_From @spMohanty on April 04, 2018 15:49_
This will be required for the VizDoom 2018 challenge.
_Copied from original issue: crowdAI/crowdai#658_
|
1.0
|
Ability to embed youtube videos in the leaderboard media column - _From @spMohanty on April 04, 2018 15:49_
This will be required for the VizDoom 2018 challenge.
_Copied from original issue: crowdAI/crowdai#658_
|
priority
|
ability to embed youtube videos in the leaderboard media column from spmohanty on april this will be required for the vizdoom challenge copied from original issue crowdai crowdai
| 1
|
145,338
| 5,565,091,515
|
IssuesEvent
|
2017-03-26 11:01:13
|
carcasanchez/TheLinkedProject
|
https://api.github.com/repos/carcasanchez/TheLinkedProject
|
opened
|
Gui Makes crash when compiling in release
|
CRITICAL High priority
|
The crash point is different each time. Probably a random memory allocation
|
1.0
|
Gui Makes crash when compiling in release - The crash point is different each time. Probably a random memory allocation
|
priority
|
gui makes crash when compiling in release the crash point is different each time probably a random memory allocation
| 1
|
383,618
| 11,360,211,951
|
IssuesEvent
|
2020-01-26 04:24:30
|
TannerDisney/DisneyCafe-Portfolio
|
https://api.github.com/repos/TannerDisney/DisneyCafe-Portfolio
|
closed
|
Create table for user orders
|
Back-End Database High Priority
|
When a user gets his order of food we need to be able to save a certification number and be able to link orders to users to go through their order history.
Order
--
* Order Id
* Order Confirmation Number
* Order Items?
`need to be able to save all items that were ordered into something like an object?`
|
1.0
|
Create table for user orders - When a user gets his order of food we need to be able to save a certification number and be able to link orders to users to go through their order history.
Order
--
* Order Id
* Order Confirmation Number
* Order Items?
`need to be able to save all items that were ordered into something like an object?`
|
priority
|
create table for user orders when a user gets his order of food we need to be able to save a certification number and be able to link orders to users to go through their order history order order id order confirmation number order items need to be able to save all items that were ordered into something like an object
| 1
|
638,089
| 20,712,559,983
|
IssuesEvent
|
2022-03-12 05:24:40
|
AY2122S2-CS2103-F11-3/tp
|
https://api.github.com/repos/AY2122S2-CS2103-F11-3/tp
|
closed
|
As a new user I can add the hair type of a customer
|
type.Story priority.High
|
... so that I can choose the correct products when treating their hair.
|
1.0
|
As a new user I can add the hair type of a customer - ... so that I can choose the correct products when treating their hair.
|
priority
|
as a new user i can add the hair type of a customer so that i can choose the correct products when treating their hair
| 1
|
755,349
| 26,426,108,797
|
IssuesEvent
|
2023-01-14 07:07:23
|
LiteLDev/LiteLoaderBDS
|
https://api.github.com/repos/LiteLDev/LiteLoaderBDS
|
closed
|
CommandSelector buffer overflow vulnerability
|
type: bug module: LiteLoader priority: high
|
### Faulting module
LiteLoader (core)
### Operating system
Windows Server 2022
### LiteLoader version
2.9.1
### BDS version
1.19.51
### What happened?
While writing a native plugin, I found that classes using CommandSelector would occasionally fail with the following error:

After analysis:
The CommandSelector structure is allocated with the wrong size: on destruction, a BDS-internal function frees a block of the wrong size, which causes the crash (the exact destruction path is unknown). Further investigation shows the actual size should be 200 bytes, while LiteLoader internally uses 192. If the CommandSelector sits at the end of a class, this problem can occur:

Since the header files cannot be modified, I hope this can be fixed as soon as possible; the only current workaround is to add a variable of at least 8 bytes right after it on the stack to act as a buffer.
### Steps to reproduce
1. Construct a CommandSelector object in any function or class, with no other local variables or class members after it.
2. When the function finishes or the class is destructed, a stack overflow exception or a system break occurs.
### Relevant logs / output
_No response_
### Plugin list
_No response_
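The workaround described in this report (adding a buffer of at least 8 bytes directly after the selector) can be sketched in C++. This is a hypothetical illustration only: `PaddedSelector` and the constants mirror the 192/200-byte figures from the report, not real LiteLoader or BDS types.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical illustration of the size mismatch described in this report:
// the header declares 192 bytes of storage, but BDS writes/frees 200 bytes,
// so the last 8 bytes spill past the declared object. The interim workaround
// is to reserve at least 8 extra bytes immediately after it.
constexpr std::size_t kDeclaredSize = 192;  // size assumed by the header
constexpr std::size_t kActualSize   = 200;  // size BDS actually uses

struct PaddedSelector
{
    unsigned char storage[kDeclaredSize];   // stands in for CommandSelector
    std::uint64_t guard;                    // >= 8-byte buffer absorbing the spill
};

static_assert(sizeof(PaddedSelector) >= kActualSize,
              "padding must cover the 8 overflowing bytes");
```

With a layout like this, the out-of-bounds write lands in `guard` instead of adjacent locals or class members, which matches the stack-buffer mitigation the report suggests.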
|
1.0
|
CommandSelector buffer overflow vulnerability - ### Faulting module
LiteLoader (core)
### Operating system
Windows Server 2022
### LiteLoader version
2.9.1
### BDS version
1.19.51
### What happened?
While writing a native plugin, I found that classes using CommandSelector would occasionally fail with the following error:

After analysis:
The CommandSelector structure is allocated with the wrong size: on destruction, a BDS-internal function frees a block of the wrong size, which causes the crash (the exact destruction path is unknown). Further investigation shows the actual size should be 200 bytes, while LiteLoader internally uses 192. If the CommandSelector sits at the end of a class, this problem can occur:

Since the header files cannot be modified, I hope this can be fixed as soon as possible; the only current workaround is to add a variable of at least 8 bytes right after it on the stack to act as a buffer.
### Steps to reproduce
1. Construct a CommandSelector object in any function or class, with no other local variables or class members after it.
2. When the function finishes or the class is destructed, a stack overflow exception or a system break occurs.
### Relevant logs / output
_No response_
### Plugin list
_No response_
|
priority
|
commandselector buffer overflow vulnerability faulting module liteloader core operating system windows server liteloader version bds version what happened while writing a native plugin classes using commandselector would occasionally fail with the following error after analysis the commandselector structure is allocated with the wrong size on destruction a bds internal function frees a block of the wrong size and the exact destruction path is unknown if commandselector sits at the end of a class this problem can occur since the header files cannot be modified i hope this can be fixed as soon as possible steps to reproduce construct a commandselector object in any function or class with no other local variables or class members after it when the function finishes or the class is destructed a stack overflow exception or a system break occurs relevant logs output no response plugin list no response
| 1
|
171,284
| 6,485,905,000
|
IssuesEvent
|
2017-08-19 14:52:54
|
enforcer574/smashclub
|
https://api.github.com/repos/enforcer574/smashclub
|
closed
|
New users always have 4 placement matches remaining
|
bug Priority 2 (High)
|
When a new user is created in the middle of a season, they always have 4 placement matches remaining, regardless of how many are required that season.
This can result in a negative number of completed placement matches appearing on the profile page if the current season requires less than 4 placements.
|
1.0
|
New users always have 4 placement matches remaining - When a new user is created in the middle of a season, they always have 4 placement matches remaining, regardless of how many are required that season.
This can result in a negative number of completed placement matches appearing on the profile page if the current season requires less than 4 placements.
|
priority
|
new users always have placement matches remaining when a new user is created in the middle of a season they always have placement matches remaining regardless of how many are required that season this can result in a negative number of completed placement matches appearing on the profile page if the current season requires less than placements
| 1
|
307,826
| 9,422,843,599
|
IssuesEvent
|
2019-04-11 10:18:05
|
chameleon-system/chameleon-system
|
https://api.github.com/repos/chameleon-system/chameleon-system
|
opened
|
Edit-on-click fields are not validated on save
|
Priority: High Type: Bug
|
**Describe the bug**
Fields that are configured as edit-on-click fields can be saved no matter what. There are no validation errors if invalid values are entered.
**Affected version(s)**
All versions.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure any table field so that it is required.
2. Open a suitable record and try to save it with this field left empty.
3. This will prompt an error.
4. Configure this field as edit-on-click now.
5. Open the record again and edit the value.
6. Save this field with an empty value.
7. The value will be saved.
**Expected behavior**
The same validation errors that occur for the default case should also occur in the edit-on-click window.
Setting high priority as this can cause corrupt data. Might be switched to normal priority if you decide that this feature is used too rarely to be of that much concern.
|
1.0
|
Edit-on-click fields are not validated on save - **Describe the bug**
Fields that are configured as edit-on-click fields can be saved no matter what. There are no validation errors if invalid values are entered.
**Affected version(s)**
All versions.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure any table field so that it is required.
2. Open a suitable record and try to save it with this field left empty.
3. This will prompt an error.
4. Configure this field as edit-on-click now.
5. Open the record again and edit the value.
6. Save this field with an empty value.
7. The value will be saved.
**Expected behavior**
The same validation errors that occur for the default case should also occur in the edit-on-click window.
Setting high priority as this can cause corrupt data. Might be switched to normal priority if you decide that this feature is used too rarely to be of that much concern.
|
priority
|
edit on click fields are not validated on save describe the bug fields that are configured as edit on click fields can be saved no matter what there are no validation errors if invalid values are entered affected version s all versions to reproduce steps to reproduce the behavior configure any table field so that it is required open a suitable record and try to save it with this field left empty this will prompt an error configure this field as edit on click now open the record again and edit the value save this field with an empty value the value will be saved expected behavior the same validation errors that occur for the default case should also occur in the edit on click window setting high priority as this can cause corrupt data might be switched to normal priority if you decide that this feature is used too rarely to be of that much concern
| 1
|
174,394
| 6,539,713,334
|
IssuesEvent
|
2017-09-01 12:38:37
|
DOAJ/doaj
|
https://api.github.com/repos/DOAJ/doaj
|
closed
|
1687-4161, 1687-417X journal name change
|
data at risk feedback high priority
|
The ISSNs used in the original application appear to be wrong, and refer to an earlier name of the journal. I believe that the ISSN is used as the key in the database, so what would happen if we changed both of them???
The journal changed name in 2007, according to the ISSN database, but it was added to DOAJ with the old ISSNs, not the new one that refers to the new name.
|
1.0
|
1687-4161, 1687-417X journal name change - The ISSNs used in the original application appear to be wrong, and refer to an earlier name of the journal. I believe that the ISSN is used as the key in the database, so what would happen if we changed both of them???
The journal changed name in 2007, according to the ISSN database, but it was added to DOAJ with the old ISSNs, not the new one that refers to the new name.
|
priority
|
journal name change the issns used in the original application appear to be wrong and refer to an earlier name of the journal i believe that the issn is used as the key in the database so what would happen if we changed both of them the journal changed name in according to the issn database but it was added to doaj with the old issns not the new one that refers to the new name
| 1
|
429,465
| 12,424,935,079
|
IssuesEvent
|
2020-05-24 14:06:55
|
batidibek/SWE_574_Group_2
|
https://api.github.com/repos/batidibek/SWE_574_Group_2
|
opened
|
Deployment: Value error when trying to upload an image
|
Priority: High Type: Bug
|
When there is an image field in the post type, an error occurs upon clicking send. Please, see attached file for error log. It appears that the error comes up in Boto3 package. I have checked the deployed Settings.py entries for AWS, Boto3 and secret keys but they all seem to be in order.
Please, see attached, to see the error log.
[Image save error.pdf](https://github.com/batidibek/SWE_574_Group_2/files/4673658/Image.save.error.pdf)
|
1.0
|
Deployment: Value error when trying to upload an image - When there is an image field in the post type, an error occurs upon clicking send. Please, see attached file for error log. It appears that the error comes up in Boto3 package. I have checked the deployed Settings.py entries for AWS, Boto3 and secret keys but they all seem to be in order.
Please, see attached, to see the error log.
[Image save error.pdf](https://github.com/batidibek/SWE_574_Group_2/files/4673658/Image.save.error.pdf)
|
priority
|
deployment value error when trying to upload an image when there is an image field in the post type an error occurs upon clicking send please see attached file for error log it appears that the error comes up in package i have checked the deployed settings py entries for aws and secret keys but they all seem to be in order please see attached to see the error log
| 1
|
303,039
| 9,301,561,289
|
IssuesEvent
|
2019-03-23 23:16:59
|
richmondrcmp/mobileapp
|
https://api.github.com/repos/richmondrcmp/mobileapp
|
opened
|
Main menu location does not update when app resumes
|
Bug High Priority
|
I leave location A (Vancouver) with the app running and return to the phone's main menu, then arrive at location B (Richmond) and resume the app; location A (Vancouver) still appears:

until I press and cancel a popup menu or invoke the “Where Am I?” option (e.g. by pressing “Vancouver, BC” from the weather/location banner):

and then return to the main menu:

**Note: If I resume the app and stay on the main menu, the location will not be updated.**
|
1.0
|
Main menu location does not update when app resumes - I leave location A (Vancouver) with the app running and return to the phone's main menu, then arrive at location B (Richmond) and resume the app; location A (Vancouver) still appears:

until I press and cancel a popup menu or invoke the “Where Am I?” option (e.g. by pressing “Vancouver, BC” from the weather/location banner):

and then return to the main menu:

**Note: If I resume the app and stay on the main menu, the location will not be updated.**
|
priority
|
main menu location does not update when app resumes i leave location a vancouver with the app running and return to the phone s main menu and then arrive at location b richmond and resume the app location a vancouver still appears until i press and cancel a popup menu or invoke the “where am i ” option e g by pressing “vancouver bc” from the weather location banner and then return to the main menu note if i resume the app and stay on the main menu the location will not be updated
| 1
|
206,651
| 7,114,554,672
|
IssuesEvent
|
2018-01-18 01:23:04
|
tgockel/zookeeper-cpp
|
https://api.github.com/repos/tgockel/zookeeper-cpp
|
closed
|
Calling server::shutdown will always crash
|
bug lib/server priority/high
|
If `server::shutdown` is called with `true`, the `std::thread` will be joined again in `~server`, which will always throw an `std::invalid_argument` leading to an abort, since `~server` is `noexcept`.
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff36993aa in __GI_abort () at abort.c:89
#2 0x00007ffff3fd9095 in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff3fd6c86 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff3fd5b89 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff3fd6578 in __gxx_personality_v0 () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007ffff4b9a9bf in __libunwind_Unwind_RaiseException () from /usr/lib/x86_64-linux-gnu/libunwind.so.8
#7 0x00007ffff3fd6f07 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#8 0x00007ffff40030be in std::__throw_system_error(int) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#9 0x00007ffff40033cc in std::thread::join() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#10 0x00007ffff6e7ee09 in zk::server::server::shutdown (this=<optimized out>, wait_for_stop=<optimized out>) at //root/zookeeper-cpp/src/zk/server/server.cpp:32
#11 0x00007ffff6e7ee1e in zk::server::server::~server (this=0x1002307c0, __in_chrg=<optimized out>) at //root/zookeeper-cpp/src/zk/server/server.cpp:19
...
```
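The failure mode above — `std::thread::join` being called a second time from the destructor — is conventionally avoided with a guarded, idempotent join. The following is a minimal sketch, not the actual zookeeper-cpp code; `worker` and its members are hypothetical names.

```cpp
#include <cassert>
#include <thread>

// Minimal sketch of the fix implied by this report: make shutdown idempotent
// by checking joinable() before join(), so the destructor's second call is a
// no-op instead of throwing (std::thread::join throws on a non-joinable thread).
class worker
{
public:
    worker() : thread_([] { /* event loop would run here */ }) { }

    ~worker() { shutdown(true); }   // safe even if shutdown() already ran

    // Returns true if this call actually joined the thread.
    bool shutdown(bool wait_for_stop)
    {
        if (wait_for_stop && thread_.joinable())
        {
            thread_.join();
            return true;
        }
        return false;               // already joined -- nothing to do
    }

private:
    std::thread thread_;
};
```

With this guard, an explicit `shutdown(true)` followed by the destructor joins the thread exactly once.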
|
1.0
|
Calling server::shutdown will always crash - If `server::shutdown` is called with `true`, the `std::thread` will be joined again in `~server`, which will always throw an `std::invalid_argument` leading to an abort, since `~server` is `noexcept`.
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff36993aa in __GI_abort () at abort.c:89
#2 0x00007ffff3fd9095 in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff3fd6c86 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff3fd5b89 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff3fd6578 in __gxx_personality_v0 () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007ffff4b9a9bf in __libunwind_Unwind_RaiseException () from /usr/lib/x86_64-linux-gnu/libunwind.so.8
#7 0x00007ffff3fd6f07 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#8 0x00007ffff40030be in std::__throw_system_error(int) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#9 0x00007ffff40033cc in std::thread::join() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#10 0x00007ffff6e7ee09 in zk::server::server::shutdown (this=<optimized out>, wait_for_stop=<optimized out>) at //root/zookeeper-cpp/src/zk/server/server.cpp:32
#11 0x00007ffff6e7ee1e in zk::server::server::~server (this=0x1002307c0, __in_chrg=<optimized out>) at //root/zookeeper-cpp/src/zk/server/server.cpp:19
...
```
|
priority
|
calling server shutdown will always crash if server shutdown is called with true the std thread will be joined again in server which will always throw an std invalid argument leading to an abort since server is noexcept gi raise sig sig entry at sysdeps unix sysv linux raise c in gi abort at abort c in gnu cxx verbose terminate handler from usr lib linux gnu libstdc so in from usr lib linux gnu libstdc so in from usr lib linux gnu libstdc so in gxx personality from usr lib linux gnu libstdc so in libunwind unwind raiseexception from usr lib linux gnu libunwind so in cxa throw from usr lib linux gnu libstdc so in std throw system error int from usr lib linux gnu libstdc so in std thread join from usr lib linux gnu libstdc so in zk server server shutdown this wait for stop at root zookeeper cpp src zk server server cpp in zk server server server this in chrg at root zookeeper cpp src zk server server cpp
| 1
|
131,160
| 5,144,184,749
|
IssuesEvent
|
2017-01-12 17:55:05
|
Esri/solutions-webappbuilder-widgets
|
https://api.github.com/repos/Esri/solutions-webappbuilder-widgets
|
closed
|
Distance and Direction - Creating lines based on distance and bearing always creates a line longer than length parameter value
|
3 - Verify B - Bug Distance and Direction G - Defense Team High Priority Showstopper
|
### Widget
Distance and Direction
### Version of widget
build 11 Jan 2017 09:51
### Bug or Enhancement
When creating lines from distance and bearing, the created line is always longer than what the user specified.
In the screenshot below, I set the length to 1000 km, but the labeled line states the line is longer than 1000 km. Also tested all the other length units and the behavior is the same for all units.

Another verification that the length is longer, is that I created a circle with 1000 km radius centered at 1,1 and the arrow head of the line extends past the circle

### Repo Steps or Enhancement details
1. From line tool, select "Distance and Bearing"
2. Set Start Point to 1, 1 and hit return.
3. Set Length to 1000 kilometers.
4. Set Angle to 90 degrees and hit return.
The label on the created line is longer than 1000 kilometers. Also verified with the Measure widget.
|
1.0
|
Distance and Direction - Creating lines based on distance and bearing always creates a line longer than length parameter value - ### Widget
Distance and Direction
### Version of widget
build 11 Jan 2017 09:51
### Bug or Enhancement
When creating lines from distance and bearing, the created line is always longer than what the user specified.
In the screenshot below, I set the length to 1000 km, but the labeled line states the line is longer than 1000 km. Also tested all the other length units and the behavior is the same for all units.

Another verification that the length is longer, is that I created a circle with 1000 km radius centered at 1,1 and the arrow head of the line extends past the circle

### Repo Steps or Enhancement details
1. From line tool, select "Distance and Bearing"
2. Set Start Point to 1, 1 and hit return.
3. Set Length to 1000 kilometers.
4. Set Angle to 90 degrees and hit return.
The label on the created line is longer than 1000 kilometers. Also verified with the Measure widget.
|
priority
|
distance and direction creating lines based on distance and bearing always creates a line longer than length parameter value widget distance and direction version of widget build jan bug or enhancement when creating lines from distance and bearing the created line is always longer than what the user specified in the screenshot below i set the length to km but the labeled line states line is longer than km also tested all the other length units and behavior is the same for all units another verification that the length is longer is that i created a circle with km radius centered at and the arrow head of the line extends past the circle repo steps or enhancement details from line tool select distance and bearing set start point to and hit return set length to kilometers set angle to degrees and hit return the label on the created line is longer than kilometers also verified with the measure widget
| 1
|
506,683
| 14,671,000,522
|
IssuesEvent
|
2020-12-30 06:55:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.nbcnews.com - see bug description
|
browser-focus-geckoview engine-gecko ml-needsdiagnosis-false ml-probability-high priority-important
|
<!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64587 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.nbcnews.com/news/us-news/police-nashville-blast-credit-divine-intervention-say-rv-played-downtown-n1252394
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Admiral adblock detector
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.nbcnews.com - see bug description - <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64587 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.nbcnews.com/news/us-news/police-nashville-blast-credit-divine-intervention-say-rv-played-downtown-n1252394
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Admiral adblock detector
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
priority
|
see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description admiral adblock detector steps to reproduce browser configuration none from with ❤️
| 1
|
768,119
| 26,953,978,628
|
IssuesEvent
|
2023-02-08 13:44:12
|
ditrit/leto-modelizer
|
https://api.github.com/repos/ditrit/leto-modelizer
|
closed
|
Display more information for one attribute
|
User Story Priority: High
|
## Description
**As a** user, when I want to edit attributes of a component,
**I want** to have a readable name, a description, and a URL for its documentation,
**so that** I can better understand the purpose of this attribute.
|
1.0
|
Display more information for one attribute - ## Description
**As a** user, when I want to edit attributes of a component,
**I want** to have a readable name, a description, and a URL for its documentation,
**so that** I can better understand the purpose of this attribute.
|
priority
|
display more information for one attribute description as a user when i want to edit attributes of a component i want to have a readable name a description and a url for its documentation so that i can better understand the purpose of this attribute
| 1
|
400,564
| 11,777,012,528
|
IssuesEvent
|
2020-03-16 14:11:27
|
wso2/kubernetes-apim
|
https://api.github.com/repos/wso2/kubernetes-apim
|
closed
|
OpenShift deployment
|
Priority/Highest Type/Question
|
Is it safe to assume that the new Kubernetes resources could be deployed to OpenShift by substituting the equivalent `oc` CLI command for each `kubectl` CLI command?
Or would *Helm* be the preferred approach for OpenShift?
Thanks!
|
1.0
|
OpenShift deployment - Is it safe to assume that the new Kubernetes resources could be deployed to OpenShift by substituting the equivalent `oc` CLI command for each `kubectl` CLI command?
Or would *Helm* be the preferred approach for OpenShift?
Thanks!
|
priority
|
openshift deployment is it safe to assume that the new kubernetes resources could be deployed to openshift by substituting the equivalent oc cli command for each kubectl cli command or would helm be the preferred approach for openshift thanks
| 1
|
319,092
| 9,739,108,827
|
IssuesEvent
|
2019-06-01 08:12:39
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[Spell] [Inscription: Darkmoon Card of Destruction]
|
Fixed Confirmed Fixed in Dev Priority-High Profession
|
**Links:**
https://www.wowhead.com/spell=86615/darkmoon-card-of-destruction
https://www.wowhead.com/item=61987/darkmoon-card-of-destruction
**What is happening:**
creates this item darkmoon-card-of-destruction
**What should happen:**
should create a random card from a deck
[Earthquake Deck]
https://www.wowhead.com/item=62046/earthquake-deck
[Hurricane Deck]
https://www.wowhead.com/item=62045/hurricane-deck
[Tsunami Deck]
https://www.wowhead.com/item=62044/tsunami-deck
[Volcanic Deck]
https://www.wowhead.com/item=62021/volcanic-deck
|
1.0
|
[Spell] [Inscription: Darkmoon Card of Destruction] - **Links:**
https://www.wowhead.com/spell=86615/darkmoon-card-of-destruction
https://www.wowhead.com/item=61987/darkmoon-card-of-destruction
**What is happening:**
creates this item darkmoon-card-of-destruction
**What should happen:**
should create a random card from a deck
[Earthquake Deck]
https://www.wowhead.com/item=62046/earthquake-deck
[Hurricane Deck]
https://www.wowhead.com/item=62045/hurricane-deck
[Tsunami Deck]
https://www.wowhead.com/item=62044/tsunami-deck
[Volcanic Deck]
https://www.wowhead.com/item=62021/volcanic-deck
|
priority
|
links what is happening creates this item darkmoon card of destruction what should happen should create a random card from a deck
| 1
|
777,978
| 27,299,881,850
|
IssuesEvent
|
2023-02-24 00:24:45
|
NCC-CNC/wheretowork
|
https://api.github.com/repos/NCC-CNC/wheretowork
|
closed
|
Total area budget error
|
bug high priority
|
It seems like the `Total area budget` option causes an error now.
As an example, using the built in "South Western Ontario" project and this configuration file:
[s3_configs.zip](https://github.com/NCC-CNC/wheretowork/files/10769770/s3_configs.zip)
and toggling on `Total area budget` like this:

causes this uninformative error:

I've tried with different configurations/parameters and projects and get the same result. Something seems off with the `Total area budget` option.
|
1.0
|
Total area budget error - It seems like the `Total area budget` option causes an error now.
As an example, using the built in "South Western Ontario" project and this configuration file:
[s3_configs.zip](https://github.com/NCC-CNC/wheretowork/files/10769770/s3_configs.zip)
and toggling on `Total area budget` like this:

causes this uninformative error:

I've tried with different configurations/parameters and projects and get the same result. Something seems off with the `Total area budget` option.
|
priority
|
total area budget error it seems like the total area budget option causes an error now as an example using the built in south western ontario project and this configuration file and toggling on total area budget like this causes this uninformative error i ve tried with different configurations parameters and projects and get the same result something seems off with the total area budget option
| 1
|
392,968
| 11,598,358,911
|
IssuesEvent
|
2020-02-24 22:54:54
|
TannerDisney/DisneyCafe-Portfolio
|
https://api.github.com/repos/TannerDisney/DisneyCafe-Portfolio
|
closed
|
Remove Database for Ingredients
|
Back-End Database High Priority
|
Need to remove unused database table because we are going to only sell the desserts and not sell them by ingredient
|
1.0
|
Remove Database for Ingredients - Need to remove unused database table because we are going to only sell the desserts and not sell them by ingredient
|
priority
|
remove database for ingredients need to remove unused database table because we are going to only sell the desserts and not sell them by ingredient
| 1
|
621,301
| 19,582,633,819
|
IssuesEvent
|
2022-01-05 00:05:03
|
aws-solutions/aws-media-insights-content-localization
|
https://api.github.com/repos/aws-solutions/aws-media-insights-content-localization
|
closed
|
Add SRT Download Option to Subtitles Tab
|
enhancement point: 3 priority: high
|
**Is your feature request related to a problem? Please describe.**
Currently when I transcribe and translate video content, the GUI application for subtitles option only exposes the option to download VTT format whereas the translation offers SRT and VTT.
This poses a blocker to utilizing for native language caption/subtitle files for platforms like Facebook, Instagram, and LinkedIn that only support SRT.
<img width="1435" alt="Screen Shot 2021-12-09 at 6 16 02 PM" src="https://user-images.githubusercontent.com/95241370/145506615-4d9a0931-be05-49b6-aaf2-cd222f7409d9.png">
<img width="1419" alt="Screen Shot 2021-12-09 at 6 16 11 PM" src="https://user-images.githubusercontent.com/95241370/145506644-237f0df8-e9bd-425c-a8dc-afcb0c0a8314.png">
**Describe the feature you'd like**
Expose the SRT file in the "Subtitle tab" in the same way it is exposed in the "Translation" tab.
**Additional context**
I have confirmed that the SRT is automatically created and placed in the S3 bucket as part of the workflow. It just appears to not be exposed in the UI.
<img width="738" alt="Screen Shot 2021-12-09 at 6 20 45 PM" src="https://user-images.githubusercontent.com/95241370/145506753-dab3a1b5-10d0-4ddf-ad94-472ec1efb1ff.png">
|
1.0
|
Add SRT Download Option to Subtitles Tab - **Is your feature request related to a problem? Please describe.**
Currently when I transcribe and translate video content, the GUI application for subtitles option only exposes the option to download VTT format whereas the translation offers SRT and VTT.
This poses a blocker to utilizing for native language caption/subtitle files for platforms like Facebook, Instagram, and LinkedIn that only support SRT.
<img width="1435" alt="Screen Shot 2021-12-09 at 6 16 02 PM" src="https://user-images.githubusercontent.com/95241370/145506615-4d9a0931-be05-49b6-aaf2-cd222f7409d9.png">
<img width="1419" alt="Screen Shot 2021-12-09 at 6 16 11 PM" src="https://user-images.githubusercontent.com/95241370/145506644-237f0df8-e9bd-425c-a8dc-afcb0c0a8314.png">
**Describe the feature you'd like**
Expose the SRT file in the "Subtitle tab" in the same way it is exposed in the "Translation" tab.
**Additional context**
I have confirmed that the SRT is automatically created and placed in the S3 bucket as part of the workflow. It just appears to not be exposed in the UI.
<img width="738" alt="Screen Shot 2021-12-09 at 6 20 45 PM" src="https://user-images.githubusercontent.com/95241370/145506753-dab3a1b5-10d0-4ddf-ad94-472ec1efb1ff.png">
|
priority
|
add srt download option to subtitles tab is your feature request related to a problem please describe currently when i transcribe and translate video content the gui application for subtitles option only exposes the option to download vtt format whereas the translation offers srt and vtt this poses a blocker to utilizing for native language caption subtitle files for platforms like facebook instagram and linkedin that only support srt img width alt screen shot at pm src img width alt screen shot at pm src describe the feature you d like expose the srt file in the subtitle tab in the same way it is exposed in the translation tab additional context i have confirmed that the srt is automatically created and placed in the bucket as part of the workflow it just appears to not be exposed in the ui img width alt screen shot at pm src
| 1
|
450,750
| 13,018,796,100
|
IssuesEvent
|
2020-07-26 19:09:37
|
ION28/BLUESPAWN
|
https://api.github.com/repos/ION28/BLUESPAWN
|
closed
|
Include last write time for registry keys in registry detection
|
difficulty/easy lang/c++ mode/other module/configuration module/logging priority/high type/enhancement
|
Value can be obtained via RegQueryInfoKey
|
1.0
|
Include last write time for registry keys in registry detection - Value can be obtained via RegQueryInfoKey
|
priority
|
include last write time for registry keys in registry detection value can be obtained in regqueryinfokey
| 1
|
73,767
| 3,421,065,911
|
IssuesEvent
|
2015-12-08 17:12:05
|
VertNet/georefcalculator
|
https://api.github.com/repos/VertNet/georefcalculator
|
closed
|
BUG: Promote does not promote
|
bug high priority
|
Select Calculation Type "Coordinates Only"
Select Location Type "Distance along orthogonal directions"
Select Coordinate System "degrees minutes seconds"
Select Latitude 35d 22m 30s N
Select Longitude 119d 0m 0s W
Select Datum NAD27
Select NS Offset distance 0.11498 S
Select EW Offset distance 1.00379 W
Select Distance Units "mi"
Click Calculate
Expected: Decimal Latitude 35.3733321 Decimal Longitude -119.0177772
Observed: Decimal Latitude 35.3733321 Decimal Longitude -119.0177772
So far, so good.
Click Promote
Expected: Latitude 35d 22m 24s N Longitude 119d 1m 4s W
Observed: No change in Latitude (35d 22m 30s N) and Longitude (119d 0m 0s W)
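For reference, the conversion Promote should perform — decimal degrees back to degrees/minutes/seconds — can be sketched as follows. This is a minimal Python illustration, not the Calculator's actual code; `to_dms` is a hypothetical helper name.

```python
def to_dms(dec, pos_hemi, neg_hemi):
    """Convert a decimal-degree coordinate to (deg, min, sec, hemisphere),
    rounding seconds to the nearest whole second for display."""
    hemi = pos_hemi if dec >= 0 else neg_hemi
    dec = abs(dec)
    deg = int(dec)
    minutes_full = (dec - deg) * 60
    minutes = int(minutes_full)
    seconds = round((minutes_full - minutes) * 60)
    if seconds == 60:   # carry rounding overflow, e.g. 59.9995s -> next minute
        seconds = 0
        minutes += 1
    if minutes == 60:
        minutes = 0
        deg += 1
    return deg, minutes, seconds, hemi

# Expected values from the report:
print(to_dms(35.3733321, "N", "S"))    # -> (35, 22, 24, 'N')
print(to_dms(-119.0177772, "E", "W"))  # -> (119, 1, 4, 'W')
```

This reproduces the expected Promote output (35d 22m 24s N, 119d 1m 4s W) from the calculated decimal coordinates.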
|
1.0
|
BUG: Promote does not promote - Select Calculation Type "Coordinates Only"
Select Location Type "Distance along orthogonal directions"
Select Coordinate System "degrees minutes seconds"
Select Latitude 35d 22m 30s N
Select Longitude 119d 0m 0s W
Select Datum NAD27
Select NS Offset distance 0.11498 S
Select EW Offset distance 1.00379 W
Select Distance Units "mi"
Click Calculate
Expected: Decimal Latitude 35.3733321 Decimal Longitude -119.0177772
Observed: Decimal Latitude 35.3733321 Decimal Longitude -119.0177772
So far, so good.
Click Promote
Expected: Latitude 35d 22m 24s N Longitude 119d 1m 4s W
Observed: No change in Latitude (35d 22m 30s N) and Longitude (119d 0m 0s W)
|
priority
|
bug promote does not promote select calculation type coordinates only select location type distance along orthogonal directions select coordinate system degrees minutes seconds select latitude n select longitude w select datum select ns offset distance s select ew offset distance w select distance units mi click calculate expected decimal latitude decimal longitude observed decimal latitude decimal longitude so far so good click promote expected latitude n longitude w observed no change in latitude n and longitude w
| 1
|
587,020
| 17,602,311,114
|
IssuesEvent
|
2021-08-17 13:18:35
|
dmwm/CRABServer
|
https://api.github.com/repos/dmwm/CRABServer
|
closed
|
add --deep flag for list_dataset_replicas call
|
Status: Done Priority: High
|
from https://hypernews.cern.ch/HyperNews/CMS/get/computing-tools/6102.html thread:
```
One should never rely on list-dataset-replicas without the --deep flag:
[ewv@lxplus802 rucio]$ rucio list-dataset-replicas --deep cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
DATASET: cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
+-----------------+---------+---------+
\| RSE \| FOUND \| TOTAL \|
\|-----------------+---------+---------\|
\| T2_CH_CERN \| 3 \| 3 \|
\| T0_CH_CERN_Disk \| 3 \| 3 \|
+-----------------+---------+---------+
[ewv@lxplus802 rucio]$ rucio list-dataset-replicas cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
DATASET: cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
+-----------------+---------+---------+
\| RSE \| FOUND \| TOTAL \|
\|-----------------+---------+---------\|
\| T0_CH_CERN_Disk \| 0 \| 0 \|
\| T2_CH_CERN \| 0 \| 0 \|
+-----------------+---------+---------+
The former uses a database shortcut which is updated occasionally (and, observationally, is sometimes just wrong), whereas the latter queries every file in the dataset. The performance doesn’t seem to be much different.
```
correction from Eric:
```
Sorry, I inverted those. Deep does NOT rely on the database shortcuts while the "normal" version does.
```
|
1.0
|
add --deep flag for list_dataset_replicas call - from https://hypernews.cern.ch/HyperNews/CMS/get/computing-tools/6102.html thread:
```
One should never rely on list-dataset-replicas without the --deep flag:
[ewv@lxplus802 rucio]$ rucio list-dataset-replicas --deep cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
DATASET: cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
+-----------------+---------+---------+
\| RSE \| FOUND \| TOTAL \|
\|-----------------+---------+---------\|
\| T2_CH_CERN \| 3 \| 3 \|
\| T0_CH_CERN_Disk \| 3 \| 3 \|
+-----------------+---------+---------+
[ewv@lxplus802 rucio]$ rucio list-dataset-replicas cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
DATASET: cms:/ExpressCosmics/Commissioning2021-Express-v1/FEVT#12ac3e59-081a-49c7-9b0f-9d0fec33b0f9
+-----------------+---------+---------+
\| RSE \| FOUND \| TOTAL \|
\|-----------------+---------+---------\|
\| T0_CH_CERN_Disk \| 0 \| 0 \|
\| T2_CH_CERN \| 0 \| 0 \|
+-----------------+---------+---------+
The former uses a database shortcut which is updated occasionally (and, observationally, is sometimes just wrong), whereas the latter queries every file in the dataset. The performance doesn’t seem to be much different.
```
correction from Eric:
```
Sorry, I inverted those. Deep does NOT rely on the database shortcuts while the "normal" version does.
```
|
priority
|
add deep flag for list dataset replicas call from thread one should never rely on list dataset replicas without the deep flag rucio list dataset replicas deep cms expresscosmics express fevt dataset cms expresscosmics express fevt rse found total ch cern ch cern disk rucio list dataset replicas cms expresscosmics express fevt dataset cms expresscosmics express fevt rse found total ch cern disk ch cern the former uses a database shortcut which updated occasionally and observationally sometimes just wrong where the latter queries every file in the dataset the performance doesn’t seem to be much different correction from eric sorry i inverted those deep does not rely on the database shortcuts while the â onormalâ version does
| 1
|
216,519
| 7,309,022,577
|
IssuesEvent
|
2018-02-28 10:20:44
|
opengeospatial/ets-wcs20
|
https://api.github.com/repos/opengeospatial/ets-wcs20
|
closed
|
wcseo:get-kvp-req50 fails with InvocationTargetException (eowcs)
|
bug priority:high status:to-verify
|
## Setup
SUT: TODO
Selected CCs: TODO
Tested TODO
## Test failure
" Error in call to extension function {public org.w3c.dom.NodeList com.occamlab.te.TECore.request(org.w3c.dom.Document,java.lang.String) throws java.lang.Throwable}: Exception in extension function java.lang.reflect.InvocationTargetException"
* Failure from error log:
```
Caused by: java.lang.RuntimeException: Failed to parse resource from http://34.224.32.234:6080/arcgis/services/Mynetcdf/ImageServer/WCSServer?service=WCS&VERSION=2.0.1&request=DescribeEOCoverageSet&EOID=ds_Mynetcdf&subset=phenomenonTime("2013-03-17T00:00:00Z","2013-03-17T21:00:00Z")&containment=overlaps
Failed to parse input: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream
at com.occamlab.te.parsers.XMLValidatingParser.parse(XMLValidatingParser.java:222)
... 153 more
```
* Request send by the ETS contains `phenomenonTime("2013-03-17T00:00:00Z","2013-03-17T21:00:00Z")`
## Proposal
* encode parameter value (as in #45)
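The proposed fix amounts to percent-encoding the subset value before placing it in the KVP request, so the quotes, commas, and parentheses in `phenomenonTime(...)` no longer break parsing. A minimal sketch using Python's standard library (illustrative only — the actual test suite is not written in Python):

```python
from urllib.parse import quote

subset = 'phenomenonTime("2013-03-17T00:00:00Z","2013-03-17T21:00:00Z")'
# safe="" percent-encodes everything outside the unreserved set, so
# quotes, commas, colons, and parens become %22, %2C, %3A, %28/%29.
encoded = quote(subset, safe="")
print(encoded)
```

The encoded value can then be appended as `&subset=<encoded>` to form a valid request URL.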
|
1.0
|
wcseo:get-kvp-req50 fails with InvocationTargetException (eowcs) - ## Setup
SUT: TODO
Selected CCs: TODO
Tested TODO
## Test failure
" Error in call to extension function {public org.w3c.dom.NodeList com.occamlab.te.TECore.request(org.w3c.dom.Document,java.lang.String) throws java.lang.Throwable}: Exception in extension function java.lang.reflect.InvocationTargetException"
* Failure from error log:
```
Caused by: java.lang.RuntimeException: Failed to parse resource from http://34.224.32.234:6080/arcgis/services/Mynetcdf/ImageServer/WCSServer?service=WCS&VERSION=2.0.1&request=DescribeEOCoverageSet&EOID=ds_Mynetcdf&subset=phenomenonTime("2013-03-17T00:00:00Z","2013-03-17T21:00:00Z")&containment=overlaps
Failed to parse input: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream
at com.occamlab.te.parsers.XMLValidatingParser.parse(XMLValidatingParser.java:222)
... 153 more
```
* Request send by the ETS contains `phenomenonTime("2013-03-17T00:00:00Z","2013-03-17T21:00:00Z")`
## Proposal
* encode parameter value (as in #45)
|
priority
|
wcseo get kvp fails with invocationtargetexception eowcs setup sut todo selected ccs todo tested todo testfailure error in call to extension function public org dom nodelist com occamlab te tecore request org dom document java lang string throws java lang throwable exception in extension function java lang reflect invocationtargetexception failure from error log caused by java lang runtimeexception failed to parse resource from failed to parse input sun net at com occamlab te parsers xmlvalidatingparser parse xmlvalidatingparser java mor request send by the ets contains phenomenontime proposal encode parameter value as in
| 1
|
384,074
| 11,383,142,575
|
IssuesEvent
|
2020-01-29 04:51:18
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
INNODB Engine is used for some tables in mysql-cluster script
|
Priority/Highest Severity/Critical Type/Bug
|
Tables in the mysql-cluster script[1] should use NDB Engine but some tables have INNODB Engine.
[1] - https://github.com/wso2/carbon-identity-framework/blob/master/features/identity-core/org.wso2.carbon.identity.core.server.feature/resources/dbscripts/mysql-cluster.sql
|
1.0
|
INNODB Engine is used for some tables in mysql-cluster script - Tables in the mysql-cluster script[1] should use NDB Engine but some tables have INNODB Engine.
[1] - https://github.com/wso2/carbon-identity-framework/blob/master/features/identity-core/org.wso2.carbon.identity.core.server.feature/resources/dbscripts/mysql-cluster.sql
|
priority
|
innodb engine is used for some tables in mysql cluster script tables in the mysql cluster script should use ndb engine but some tables have innodb engine
| 1
|
107,738
| 4,317,226,649
|
IssuesEvent
|
2016-07-23 05:41:25
|
alexinman/clashtracker
|
https://api.github.com/repos/alexinman/clashtracker
|
closed
|
Suggestion: Documentation
|
priority: high suggestion
|
Maybe you should have documentation on how to set this up. I usually can set up this type of stuff in a couple minutes but this one is just confusing for me. And also a demo website just so that we can see what it really does... even a picture is perfectly fine.
|
1.0
|
Suggestion: Documentation - Maybe you should have documentation on how to set this up. I usually can set up this type of stuff in a couple minutes but this one is just confusing for me. And also a demo website just so that we can see what it really does... even a picture is perfectly fine.
|
priority
|
suggestion documentation maybe you should have a documentation on how to set this up i usually can setup these type of stuff in a couple minutes but this one is just confusing for me and also a demo website just so that we can see what it really does even picture is perfectly fine
| 1
|
65,816
| 3,240,888,831
|
IssuesEvent
|
2015-10-15 07:58:28
|
ManoSeimas/manoseimas.lt
|
https://api.github.com/repos/ManoSeimas/manoseimas.lt
|
closed
|
Add JSON endpoint for suggesters-state and suggesters-non-state
|
enhancement priority: 1 - high
|
Client has asked to have separate tabs for government law project suggesters and for NGOs,
Government law project suggesters are:
- commissions
- MPs
- cabinet
- ?
|
1.0
|
Add JSON endpoint for suggesters-state and suggesters-non-state - Client has asked to have separate tabs for government law project suggesters and for NGOs,
Government law project suggesters are:
- commissions
- MPs
- cabinet
- ?
|
priority
|
add json endpoint for suggesters state and suggesters non state client has asked to have separate tabs for government law project suggesters and for ngos government law project suggesters are commissions mps cabinet
| 1
|
669,518
| 22,629,312,678
|
IssuesEvent
|
2022-06-30 13:27:35
|
heading1/WYLSBingsu
|
https://api.github.com/repos/heading1/WYLSBingsu
|
closed
|
[BE] article 상세페이지 API
|
⚙️ Backend ❗️high-priority 🔨 Feature
|
## 🔨 기능 설명
상세페이지 API
## 📑 완료 조건
해당 글 상세페이지 요청
## 💭 관련 백로그
[[BE] 메인 페이지]-[API]-[article 상세페이지]
## 💭 예상 작업 시간
8h
|
1.0
|
[BE] article 상세페이지 API - ## 🔨 기능 설명
상세페이지 API
## 📑 완료 조건
해당 글 상세페이지 요청
## 💭 관련 백로그
[[BE] 메인 페이지]-[API]-[article 상세페이지]
## 💭 예상 작업 시간
8h
|
priority
|
article 상세페이지 api 🔨 기능 설명 상세페이지 api 📑 완료 조건 해당 글 상세페이지 요청 💭 관련 백로그 메인 페이지 💭 예상 작업 시간
| 1
|
319,139
| 9,739,452,221
|
IssuesEvent
|
2019-06-01 11:34:29
|
Abnaxos/compose
|
https://api.github.com/repos/Abnaxos/compose
|
opened
|
Split `ThreadingModule` for fork/join and workers
|
component: core-modules priority: high type: task
|
Various frameworks may provide various methods, e.g. the whole worker part could be replaced with vert.x while still keeping a global fork/join pool. Split these two aspects into separate modules.
|
1.0
|
Split `ThreadingModule` for fork/join and workers - Various frameworks may provide various methods, e.g. the whole worker part could be replaced with vert.x while still keeping a global fork/join pool. Split these two aspects into separate modules.
|
priority
|
split threadingmodule for fork join and workers various framework may provide various methods e g the whole worker part could be replaced with vert x while still keeping a global fork join pool split these two aspects into separate modules
| 1
|
181,638
| 6,662,895,483
|
IssuesEvent
|
2017-10-02 14:38:43
|
semperfiwebdesign/all-in-one-seo-pack
|
https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack
|
closed
|
Google Search Console parsing errors in sitemap
|
Bug Priority | High
|
Reported here: https://wordpress.org/support/topic/version-2-4-broke-sitemap/#post-9542902
and here: https://wordpress.org/support/topic/version-2-4-broke-my-sitemap/#post-9542883
and here: http://filmcutting.com/sitemap.xml
and here: https://wordpress.org/support/topic/version-2-4-broke-sitemap/#post-9542902
|
1.0
|
Google Search Console parsing errors in sitemap - Reported here: https://wordpress.org/support/topic/version-2-4-broke-sitemap/#post-9542902
and here: https://wordpress.org/support/topic/version-2-4-broke-my-sitemap/#post-9542883
and here: http://filmcutting.com/sitemap.xml
and here: https://wordpress.org/support/topic/version-2-4-broke-sitemap/#post-9542902
|
priority
|
google search console parsing errors in sitemap reported here and here and here and here
| 1
|
413,784
| 12,092,100,230
|
IssuesEvent
|
2020-04-19 14:23:21
|
perry-mitchell/webdav-client
|
https://api.github.com/repos/perry-mitchell/webdav-client
|
closed
|
Encoded Characters in directory contents
|
Effort: Low Priority: High Status: Completed Type: Bug
|
A file or directory with an & in the name returns as & rather than as &, These should be decoded.
This occurs in chrome - I believe this is because the fast-xml-parser does not decode string values by default.
The api for fast-xml-parser needs a tag/attrValueProcessor option to do the decoding.
I am working around it for now by decoding basename and filename after content fetching.
This also means that directories with an & in the name have a self entry in the directory listing, as the &amp; name does not match for the webdav-client to strip out.
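The workaround described above — decoding `basename` and `filename` after content fetching — boils down to unescaping XML entities in the returned names. A minimal sketch of the idea in Python (the client itself is JavaScript, so this is an illustration only; `decode_name` is a hypothetical helper):

```python
from xml.sax.saxutils import unescape

def decode_name(name: str) -> str:
    """Undo XML entity escaping left in a basename by the XML parser,
    e.g. 'reports &amp; data' -> 'reports & data'."""
    return unescape(name)

print(decode_name("reports &amp; data"))  # -> reports & data
```

With names decoded this way, the directory's self entry also matches again and can be stripped from the listing.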
|
1.0
|
Encoded Characters in directory contents - A file or directory with an & in the name returns as &amp; rather than as &. These should be decoded.
This occurs in chrome - I believe this is because the fast-xml-parser does not decode string values by default.
The api for fast-xml-parser needs a tag/attrValueProcessor option to do the decoding.
I am working around it for now by decoding basename and filename after content fetching.
This also means that directories with an & in the name have a self entry in the directory listing, as the &amp; name does not match for the webdav-client to strip out.
|
priority
|
encoded characters in directory contents a file or directory with an in the name returns as amp rather than as these should be decoded this occurs in chrome i believe this is because the fast xml parser does not decode string values by default the api for fast xml parser needs a tag attrvalueprocessor option to do the decoding i am working around it for now by decoding basename and filename after content fetching this also means that directories with an in the name have a self entry in the directory listing as the amp name does not match for the webdav client to strip out
| 1
|
42,285
| 2,870,036,461
|
IssuesEvent
|
2015-06-06 19:19:21
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
[DEV-FREEZE] Noticeable lag when shooting guns
|
bug priority: high
|
There seems to be a noticeable lag when shooting guns, particularly in burst fire. Might be due to spawning a bunch of light effect objects, not sure though.
|
1.0
|
[DEV-FREEZE] Noticeable lag when shooting guns - There seems to be a noticeable lag when shooting guns, particularly in burst fire. Might be due to spawning a bunch of light effect objects, not sure though.
|
priority
|
noticable lag when shooting guns there seems to be a noticeable lag when shooting guns particularly in burst fire might be due to spawning a bunch of light effect objects not sure though
| 1
|
649,200
| 21,259,270,260
|
IssuesEvent
|
2022-04-13 01:03:49
|
hackforla/expunge-assist
|
https://api.github.com/repos/hackforla/expunge-assist
|
closed
|
Change copy in prototype before the MVP launch
|
role: development priority: high size: 1pt feature: figma mobile prototype feature: figma desktop prototype feature: figma content writing
|
### Overview
Content team has made smaller copy changes to a few screens to update the overall tone of the product before the MVP launch from a congratulatory, joyous tone to an empathetic, supportive tone.
Changes are written in the google doc below, mockups are in the main figma content writing tab, and Sam is changing the copy in the EA main figma prototypes.
Contact Sam with any questions.
### Action Items
- [ ] Change copy on prototype screens
- [ ] See comments on Desktop prototype in Figma regarding spacing
- [ ] Additionally, change "Introduce Yourself!" to "Introduce Yourself" (remove exclamation mark) on all frames under "Introduce Yourself" sections in both mobile and desktop prototypes
### Resources/Instructions
Copy is in [this](https://docs.google.com/document/d/12cTlPhxR_nPym6OcsMfnYtsRlVAW7jYH29wHp9Mvg5I/edit) document.
Mockups in Expunge Assist Main Figma under [content tab](https://www.figma.com/file/hYqRxmBVtJbDv9DJXV6nra/Expunge-Assist-Main-Figma?node-id=2%3A5).
|
1.0
|
Change copy in prototype before the MVP launch - ### Overview
Content team has made smaller copy changes to a few screens to update the overall tone of the product before the MVP launch from a congratulatory, joyous tone to an empathetic, supportive tone.
Changes are written in the google doc below, mockups are in the main figma content writing tab, and Sam is changing the copy in the EA main figma prototypes.
Contact Sam with any questions.
### Action Items
- [ ] Change copy on prototype screens
- [ ] See comments on Desktop prototype in Figma regarding spacing
- [ ] Additionally, change "Introduce Yourself!" to "Introduce Yourself" (remove exclamation mark) on all frames under "Introduce Yourself" sections in both mobile and desktop prototypes
### Resources/Instructions
Copy is in [this](https://docs.google.com/document/d/12cTlPhxR_nPym6OcsMfnYtsRlVAW7jYH29wHp9Mvg5I/edit) document.
Mockups in Expunge Assist Main Figma under [content tab](https://www.figma.com/file/hYqRxmBVtJbDv9DJXV6nra/Expunge-Assist-Main-Figma?node-id=2%3A5).
|
priority
|
change copy in prototype before the mvp launch overview content team has made smaller copy changes to a few screens to update the overall tone of the product before the mvp launch from a congratulatory joyous tone to an empathetic supportive tone changes are written in the google doc below mockups are in the main figma content writing tab and sam is changing the copy in the ea main figma prototypes contact sam with any questions action items change copy on prototype screens see comments on desktop prototype in figma regarding spacing additionally change introduce yourself to introduce yourself remove exclamation mark on all frames under introduce yourself sections in both mobile and desktop prototypes resources instructions copy is in document mockups in expunge assist main figma under
| 1
|
393,735
| 11,624,096,274
|
IssuesEvent
|
2020-02-27 10:09:00
|
christopherpeters-git/hololens-project
|
https://api.github.com/repos/christopherpeters-git/hololens-project
|
closed
|
fix performance issues
|
high priority
|
inspiration:
https://forums.hololens.com/discussion/409/low-fps-in-hololens-app
https://docs.microsoft.com/en-us/windows/mixed-reality/performance-recommendations-for-unity
- [ ] deactivate wireframing when Deploying to HoloLens
|
1.0
|
fix performance issues - inspiration:
https://forums.hololens.com/discussion/409/low-fps-in-hololens-app
https://docs.microsoft.com/en-us/windows/mixed-reality/performance-recommendations-for-unity
- [ ] deactivate wireframing when Deploying to HoloLens
|
priority
|
fix performance issues inspiration deactivate wireframing when deploying to hololens
| 1
|
509,045
| 14,711,549,016
|
IssuesEvent
|
2021-01-05 07:36:30
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Transaction initiator block cannot be used in main method
|
Points/1 Priority/High Team/jBallerina Type/Bug
|
**Description:**
When we have a transaction initiator inside the main method, it will fail because coordination services start only after the main method exits.
ex.
Sample code to reproduce:
-- Service --
```ballerina
import ballerina/http;
import ballerina/log;
import ballerina/transactions;
@http:ServiceConfig {
basePath: "/"
}
service hello on new http:Listener(8991) {
@http:ResourceConfig {
methods: ["POST"],
path: "/"
}
@transactions:Participant { }
resource function sayHello(http:Caller caller, http:Request req) {
var result = caller->respond("Hello, World!");
if (result is error) {
log:printError("Error sending response", err = result);
}
}
}
```
-- Transaction initiator --
```ballerina
import ballerina/http;
import ballerina/io;
http:Client clientEndpoint = new("http://localhost:8991/");
public function main(string... args) {
http:Request req = new;
req.setPayload("POST: Hello World");
transaction {
var response = clientEndpoint->post("/", req);
io:println(response);
} onretry {
io:println("retrying");
}
}
```
|
1.0
|
Transaction initiator block cannot be used in main method - **Description:**
When we have a transaction initiator inside the main method, it will fail because coordination services start only after the main method exits.
ex.
Sample code to reproduce:
-- Service --
```ballerina
import ballerina/http;
import ballerina/log;
import ballerina/transactions;
@http:ServiceConfig {
basePath: "/"
}
service hello on new http:Listener(8991) {
@http:ResourceConfig {
methods: ["POST"],
path: "/"
}
@transactions:Participant { }
resource function sayHello(http:Caller caller, http:Request req) {
var result = caller->respond("Hello, World!");
if (result is error) {
log:printError("Error sending response", err = result);
}
}
}
```
-- Transaction initiator --
```ballerina
import ballerina/http;
import ballerina/io;
http:Client clientEndpoint = new("http://localhost:8991/");
public function main(string... args) {
http:Request req = new;
req.setPayload("POST: Hello World");
transaction {
var response = clientEndpoint->post("/", req);
io:println(response);
} onretry {
io:println("retrying");
}
}
```
|
priority
|
transaction initiator block cannot be used in main method description when we have transaction initiator inside main method it will file as coordination services will start after main method exit ex sample code to reproduce service ballerina import ballerina http import ballerina log import ballerina transactions http serviceconfig basepath service hello on new http listener http resourceconfig methods path transactions participant resource function sayhello http caller caller http request req var result caller respond hello world if result is error log printerror error sending response err result transaction initiator ballerina import ballerina http import ballerina io http client clientendpoint new public function main string args http request req new req setpayload post hello world transaction var response clientendpoint post req io println response onretry io println retrying
| 1
|
22,305
| 2,648,683,821
|
IssuesEvent
|
2015-03-14 04:38:15
|
jeffbryner/MozDef
|
https://api.github.com/repos/jeffbryner/MozDef
|
closed
|
WebUI: Incident Veris Stat visualization enhancement
|
category:enhancement priority:high
|
Make use of the pivot table library to allow greater filtering/sorting/playing with veris tags for incidents.
|
1.0
|
WebUI: Incident Veris Stat visualization enhancement - Make use of the pivot table library to allow greater filtering/sorting/playing with veris tags for incidents.
|
priority
|
webui incident veris stat visualization enhancement make use of the pivot table library to allow greater filtering sorting playing with veris tags for incidents
| 1
|
156,919
| 5,991,019,410
|
IssuesEvent
|
2017-06-02 13:09:39
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio-ui] 2.5 bulk go live says "Bulk go live failed" when REST call times out (http conn timeout)
|
bug Priority: High
|
Currently, the Bulk Go Live UI says "bulk go live failed" when the request times out. This is not accurate -- the request is still going on the server.
In the timeout scenario:
Can we report: 'The bulk publish operation is still running.'
|
1.0
|
[studio-ui] 2.5 bulk go live says "Bulk go live failed" when REST call times out (http conn timeout) - Currently, the Bulk Go Live UI says "bulk go live failed" when the request times out. This is not accurate -- the request is still going on the server.
In the timeout scenario:
Can we report: 'The bulk publish operation is still running.'
|
priority
|
bulk go live says bulk go live failed when rest call times out http conn timeout currently the bulk go live ui says bulk go live failed when the request times out this is not accurate the request is still going on the server in the timeout scenario can we report the bulk publish operation is still be running
| 1
|
700,375
| 24,058,640,636
|
IssuesEvent
|
2022-09-16 19:33:21
|
DSpace/dspace-angular
|
https://api.github.com/repos/DSpace/dspace-angular
|
closed
|
Cookie consent shows analytics usage also when GA is disabled
|
bug component: statistics high priority e/16 privacy
|
**Describe the bug**
The Cookie consent dialog doesn't check the Google Analytics property to decide whether consent for the statistics service is needed.
**To Reproduce**
This can be verified on the official demo opening the cookie consent dialog and looking to the statistics service still present when there is no google analytics key configured.
**Expected behavior**
The statistics section should appear only when appropriate
We (4Science) are interested on working on that, it should take around **2 days on the Angular side**
|
1.0
|
Cookie consent shows analytics usage also when GA is disabled - **Describe the bug**
The Cookie consent dialog doesn't check the Google Analytics property to decide whether consent for the statistics service is needed.
**To Reproduce**
This can be verified on the official demo opening the cookie consent dialog and looking to the statistics service still present when there is no google analytics key configured.
**Expected behavior**
The statistics section should appear only when appropriate
We (4Science) are interested on working on that, it should take around **2 days on the Angular side**
|
priority
|
cookie consent shows analytics usage also when ga is disabled describe the bug the cookie consent dialog doesn t look to the google analytics property to decide if a consent is needed about statistics service or not to reproduce this can be verified on the official demo opening the cookie consent dialog and looking to the statistics service still present when there is no google analytics key configured expected behavior the statistics section should appear only when appropriate we are interested on working on that it should take around days on the angular side
| 1
|
431,505
| 12,480,492,941
|
IssuesEvent
|
2020-05-29 20:27:05
|
RichardFav/AnalysisGUI
|
https://api.github.com/repos/RichardFav/AnalysisGUI
|
closed
|
"Speed LDA Comparison (pooled experiments)" gets exponentially slower during shuffling and seems to saturate or get stuck
|
HIGH priority bug
|
test experiment: /home/skeshav/code/Analysis/data_files/2_Input_Files/Z3 - Multi Experiment Data Files/ALL-fixed-cells-final-Filtered.mdata
LDA parameters: min cell count=12; min trial count=14; comparison conditions=black, uniform, solver=eigen; cell signal type=all cells
This happens even with the smallest number of shuffles (5) when "pool all experiments" is selected. We used to be able to run both this and the "pooled neuron LDA function" (#48) with 500 shuffles and using the "pool all experiments" option. We need to do 1000 shuffles for each of these functions. Currently this is impossible (even with 5).
Please note that we also need to be able to run this on 3 comparison conditions (not just the two above) which, similar to issue #48, will include black, uniform and motordrifting.
|
1.0
|
"Speed LDA Comparison (pooled experiments)" gets exponentially slower during shuffling and seems to saturate or get stuck - test experiment: /home/skeshav/code/Analysis/data_files/2_Input_Files/Z3 - Multi Experiment Data Files/ALL-fixed-cells-final-Filtered.mdata
LDA parameters: min cell count=12; min trial count=14; comparison conditions=black, uniform, solver=eigen; cell signal type=all cells
This happens even with the smallest number of shuffles (5) when "pool all experiments" is selected. We used to be able to run both this and the "pooled neuron LDA function" (#48) with 500 shuffles and using the "pool all experiments" option. We need to do 1000 shuffles for each of these functions. Currently this is impossible (even with 5).
Please note that we also need to be able to run this on 3 comparison conditions (not just the two above) which, similar to issue #48, will include black, uniform and motordrifting.
|
priority
|
speed lda comparison pooled experiments gets exponentially slower during shuffling and seems to saturate or get stuck test experiment home skeshav code analysis data files input files multi experiment data files all fixed cells final filtered mdata lda parameters min cell count min trial count comparison conditions black uniform solver eigen cell signal type all cells this happens even with the smallest number of shuffles when pool all experiments is selected we used to be able to run both this and the pooled neuron lda function with shuffles and using the pool all experiments option we need to do shuffles for each of these functions currently this is impossible even with please note that we also need to be able to run this on comparison conditions not just the two above which similar to issue will include black uniform and motordrifting
| 1
|
593,332
| 17,970,598,193
|
IssuesEvent
|
2021-09-14 01:07:44
|
AgoraCloud/server
|
https://api.github.com/repos/AgoraCloud/server
|
opened
|
Deployments Overhaul - Part 4 - Add Deployment Scaling Method
|
enhancement priority:high
|
# Overview
## To Do
- [ ]
- [ ]
- [ ]
- [ ]
- [ ]
|
1.0
|
Deployments Overhaul - Part 4 - Add Deployment Scaling Method - # Overview
## To Do
- [ ]
- [ ]
- [ ]
- [ ]
- [ ]
|
priority
|
deployments overhaul part add deployment scaling method overview to do
| 1
|
664,411
| 22,269,550,828
|
IssuesEvent
|
2022-06-10 10:51:44
|
alephium/explorer-backend
|
https://api.github.com/repos/alephium/explorer-backend
|
opened
|
Use `AsyncReloadingCache` for block count
|
high priority
|
@tdroxler @simerplaha As far as I see, it's not using async cache
|
1.0
|
Use `AsyncReloadingCache` for block count - @tdroxler @simerplaha As far as I see, it's not using async cache
|
priority
|
use asyncreloadingcache for block count tdroxler simerplaha as far as i see it s not using async cache
| 1
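In each record above, the `text_combine` field is simply the issue title joined to the issue body with a " - " separator (visible, for example, in the `AsyncReloadingCache` record just shown). A minimal sketch of that derivation, inferred from the rows rather than taken from the dataset's actual preprocessing code:

```python
def combine(title: str, body: str) -> str:
    # Inferred pattern from the records above: "<title> - <body>".
    # The dataset authors' real preprocessing may differ in edge cases
    # (e.g. empty bodies or embedded newlines).
    return f"{title} - {body}"
```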
|
61,768
| 3,152,658,762
|
IssuesEvent
|
2015-09-16 14:47:45
|
angular/material
|
https://api.github.com/repos/angular/material
|
opened
|
autocomplete: md-item-template hard-codes "item"; doesn't allow custom expression
|
priority: high
|
Since the merging of #4391, `item` is now hard-coded into the item template, so custom expressions like `category in $scope.categories` no longer work.
Possible solution: use a `for` loop to copy all `hasOwnProperty()` props instead of just `item` and `$index`.
|
1.0
|
autocomplete: md-item-template hard-codes "item"; doesn't allow custom expression - Since the merging of #4391, `item` is now hard-coded into the item template, so custom expressions like `category in $scope.categories` no longer work.
Possible solution: use a `for` loop to copy all `hasOwnProperty()` props instead of just `item` and `$index`.
|
priority
|
autocomplete md item template hard codes item doesn t allow custom expression since the merging of item is now hard coded into the item template so custom expressions like category in scope categories no longer work possible solution use a for loop to copy all hasownproperty props instead of just item and index
| 1
|
551,059
| 16,136,884,473
|
IssuesEvent
|
2021-04-29 12:59:12
|
wso2/docs-apim
|
https://api.github.com/repos/wso2/docs-apim
|
closed
|
Need to update the WSO2 IS Connector for APIM 4.0.0
|
API-M-4.0.0 Priority/Highest Severity/Critical help wanted
|
**Description:**
Need to **get the artefact and update** the WSO2 IS Connector for APIM 4.0.0 in the following pages.
Note that the names of the pages will change as APIM 4.0.0 is based on 5110
- en/docs/administer/key-managers/configure-wso2is-connector.md
- en/docs/install-and-setup/setup/distributed-deployment/configuring-wso2-identity-server-as-a-key-manager.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-5100-to-is-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-520-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-530-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-550-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-560-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-590-to-5100.md
|
1.0
|
Need to update the WSO2 IS Connector for APIM 4.0.0 - **Description:**
Need to **get the artefact and update** the WSO2 IS Connector for APIM 4.0.0 in the following pages.
Note that the names of the pages will change as APIM 4.0.0 is based on 5110
- en/docs/administer/key-managers/configure-wso2is-connector.md
- en/docs/install-and-setup/setup/distributed-deployment/configuring-wso2-identity-server-as-a-key-manager.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-5100-to-is-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-520-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-530-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-550-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-560-to-5100.md
- en/docs/install-and-setup/upgrading-wso2-is-as-key-manager/upgrading-from-is-km-590-to-5100.md
|
priority
|
need to update the is connector for apim description need to get the artefact and update the is connector for apim in the following pages note that the names of the pages will change as apim is based on en docs administer key managers configure connector md en docs install and setup setup distributed deployment configuring identity server as a key manager md en docs install and setup upgrading is as key manager upgrading from is km to is md en docs install and setup upgrading is as key manager upgrading from is km to md en docs install and setup upgrading is as key manager upgrading from is km to md en docs install and setup upgrading is as key manager upgrading from is km to md en docs install and setup upgrading is as key manager upgrading from is km to md en docs install and setup upgrading is as key manager upgrading from is km to md
| 1
|
370,788
| 10,948,760,267
|
IssuesEvent
|
2019-11-26 09:32:24
|
Sp2000/colplus-frontend
|
https://api.github.com/repos/Sp2000/colplus-frontend
|
closed
|
root taxa in tree not pageable
|
bug high priority
|
It seems that the UI expected a maximum of 10 root taxa in the classification tree.
If there are more there is no way to browse them:
This shows only 10 superfamilies:
https://data.catalogue.life/dataset/2056/classification
but there are many more available via the search:
https://data.catalogue.life/dataset/2056/names?rank=superfamily
|
1.0
|
root taxa in tree not pageable - It seems that the UI expected a maximum of 10 root taxa in the classification tree.
If there are more there is no way to browse them:
This shows only 10 superfamilies:
https://data.catalogue.life/dataset/2056/classification
but there are many more available via the search:
https://data.catalogue.life/dataset/2056/names?rank=superfamily
|
priority
|
root taxa in tree not pageable it seems that the ui expected a maximum of root taxa in the classification tree if there are more there is no way to browse them this shows only superfamilies but there are many more available via the search
| 1
|
450,634
| 13,017,630,556
|
IssuesEvent
|
2020-07-26 13:29:43
|
SeaLoong/BLRHH
|
https://api.github.com/repos/SeaLoong/BLRHH
|
closed
|
[Auto Lottery][Physical Prize Lottery] Please upgrade the version
|
API update high priority
|
**Describe the bug**
Briefly describe the bug you encountered
[BLRHH][8:19:03 PM][Auto Lottery][Physical Prize Lottery] "BILIBILI 11th Anniversary Speech treasure chest lottery" (aid=582,number=5): please upgrade the version
**To Reproduce**
Describe the steps you performed before the bug appeared
1. Auto lottery
2. Physical prize lottery
3. BILIBILI 11th Anniversary Speech treasure chest lottery
**Expected behavior**
Briefly describe the script behavior expected after the above steps
**Screenshots**
If possible, please provide relevant screenshots

**Environment:**
- Browser: Chrome
- Browser version: Google Chrome 77.0.3865.120
- Script version: 2.4.11
- Network condition (speed): good
- Other browser extensions/scripts: -
- Time the bug appeared: June 26, 2020, 20:24:53
**Other**
If you have anything else to add, please explain it here
The Qing dynasty has fallen…
|
1.0
|
[Auto Lottery][Physical Prize Lottery] Please upgrade the version - **Describe the bug**
Briefly describe the bug you encountered
[BLRHH][8:19:03 PM][Auto Lottery][Physical Prize Lottery] "BILIBILI 11th Anniversary Speech treasure chest lottery" (aid=582,number=5): please upgrade the version
**To Reproduce**
Describe the steps you performed before the bug appeared
1. Auto lottery
2. Physical prize lottery
3. BILIBILI 11th Anniversary Speech treasure chest lottery
**Expected behavior**
Briefly describe the script behavior expected after the above steps
**Screenshots**
If possible, please provide relevant screenshots

**Environment:**
- Browser: Chrome
- Browser version: Google Chrome 77.0.3865.120
- Script version: 2.4.11
- Network condition (speed): good
- Other browser extensions/scripts: -
- Time the bug appeared: June 26, 2020, 20:24:53
**Other**
If you have anything else to add, please explain it here
The Qing dynasty has fallen…
|
priority
|
请升级版本 描述bug 简要描述所遇到的bug bilibili 宝箱抽奖 aid number 请升级版本 重现bug 说明您在进行了怎样的操作后出现了bug 自动抽奖 实物抽奖 bilibili 宝箱抽奖 预期行为 简要描述进行以上操作后预期的脚本行为 截图 如果可以,请提供有关截图 使用环境 浏览器 chrome 浏览器版本 google chrome 脚本的版本 网络情况 网速 好 其他浏览器插件 脚本 bug出现时间 其他 若您有其他想要补充的内容,请在此说明 大清亡啦…
| 1
|
643,235
| 20,926,529,048
|
IssuesEvent
|
2022-03-24 23:58:19
|
kpwhri/heartsteps
|
https://api.github.com/repos/kpwhri/heartsteps
|
closed
|
Notify every day's step goal with a notification.
|
high priority feature pre-launch
|
Every morning, daily step goals should be notified to the participant with notification.
The notification message should be like "Today, your suggested step goal is XXXX." with thumbs up and down.
** Todo **
- [x] Create a survey
- [x] Send the survey to the test user
- [x] Create a task
- [x] Test
|
1.0
|
Notify every day's step goal with a notification. - Every morning, daily step goals should be notified to the participant with notification.
The notification message should be like "Today, your suggested step goal is XXXX." with thumbs up and down.
** Todo **
- [x] Create a survey
- [x] Send the survey to the test user
- [x] Create a task
- [x] Test
|
priority
|
notify every day s step goal with a notification every morning daily step goals should be notified to the participant with notification the notification message should be like today your suggested step goal is xxxx with thumbs up and down todo create a survey send the survey to the test user create a task test
| 1
|
180,493
| 6,650,259,650
|
IssuesEvent
|
2017-09-28 15:44:10
|
GluuFederation/oxShibboleth
|
https://api.github.com/repos/GluuFederation/oxShibboleth
|
closed
|
Update our Saml authentication code to use IDP 3 flows
|
bug enhancement High Priority
|
In our IDP3 we uses old IDP2 filters to do Saml authentication. But IDP3 has flow specially developed for this. There are case when our old integration led to errors. Person in some circumstances not getting login form because we are not initialization flow properly.
|
1.0
|
Update our Saml authentication code to use IDP 3 flows - In our IDP3 we uses old IDP2 filters to do Saml authentication. But IDP3 has flow specially developed for this. There are case when our old integration led to errors. Person in some circumstances not getting login form because we are not initialization flow properly.
|
priority
|
update our saml authentication code to use idp flows in our we uses old filters to do saml authentication but has flow specially developed for this there are case when our old integration led to errors person in some circumstances not getting login form because we are not initialization flow properly
| 1
|
795,756
| 28,085,214,729
|
IssuesEvent
|
2023-03-30 09:19:17
|
wso2/docs-apim
|
https://api.github.com/repos/wso2/docs-apim
|
closed
|
Moving "What has changed" docs in migration docs to "About this release" page
|
Priority/High API-M 4.2.0
|
**Description:**
Need to move "What has changed" docs in migration docs to "About this release" page of each product release.
Also change the below line [1] under prerequisites (in 3.2.0 to 4.2.0 migration doc) where we link to the what has changed content.
_**Review what has been changed in this release. See the What Has Changed page.**_
To below, while linking to the respective new content in "About this release" pages.
**Review what has been changed in this release. See the "What Has Changed" sections regarding to each release, that you are surpassing when performing the migration. Refer [4.0.0](test2) , [4.1.0](test3), and [4.2.0](test4)**
[1] https://github.com/wso2-enterprise/migration-docs/blob/main/api-manager/migration-docs/apim/apim-4.2.0/apim/upgrading-from-320-to-420.md?plain=1#L7
|
1.0
|
Moving "What has changed" docs in migration docs to "About this release" page - **Description:**
Need to move "What has changed" docs in migration docs to "About this release" page of each product release.
Also change the below line [1] under prerequisites (in 3.2.0 to 4.2.0 migration doc) where we link to the what has changed content.
_**Review what has been changed in this release. See the What Has Changed page.**_
To below, while linking to the respective new content in "About this release" pages.
**Review what has been changed in this release. See the "What Has Changed" sections regarding to each release, that you are surpassing when performing the migration. Refer [4.0.0](test2) , [4.1.0](test3), and [4.2.0](test4)**
[1] https://github.com/wso2-enterprise/migration-docs/blob/main/api-manager/migration-docs/apim/apim-4.2.0/apim/upgrading-from-320-to-420.md?plain=1#L7
|
priority
|
moving what has changed docs in migration docs to about this release page description need to move what has changed docs in migration docs to about this release page of each product release also change the below line under prerequisites in to migration doc where we link to the what has changed content review what has been changed in this release see the what has changed page to below while linking to the respective new content in about this release pages review what has been changed in this release see the what has changed sections regarding to each release that you are surpassing when performing the migration refer and
| 1
|
465,165
| 13,357,960,683
|
IssuesEvent
|
2020-08-31 10:47:05
|
nf-core/proteomicslfq
|
https://api.github.com/repos/nf-core/proteomicslfq
|
closed
|
mzTab output errors and improvements
|
bug enhancement high-priority
|
@jpfeuffer @timo:
We have been exploring the mzTab output and detect the following issues:
- [x] all proteins are tag as `indistinguishable_proteins`. @jpfeuffer mention that some proteins should be `single_protein`
- [x] The mzML data `format`, `id_format` from the metadata are wrong:
- [x] `ms_run[n]-format=mzML format` should be `ms_run[n]-format=[MS, MS:1000584, mzML file, ]`
- `ms_run[n]-id_format=mzML unique identifier` should be `ms_run[n]-id_format=[MS, MS:1000768, Thermo nativeID format, ]`
- [ ] `study_variable[n]-description=no description given` we actually knows this from the experimental design.
- [ ] the optional global properties in mztab should contains the CvTerm accession, in the current format the folowing changes should be made:
- opt_global_FFId_category ->
- opt_global_map_index ->
- [x] opt_global_spectrum_reference -> This is already in he header.
- [x] opt_global_modified_sequence -> opt_global_cv_MS:1000889_peptidoform_sequence
- opt_global_feature_id -> @timosachsenberg is this term https://www.ebi.ac.uk/ols/ontologies/ms/terms?iri=http%3A%2F%2Fpurl.obolibrary.org%2Fobo%2FMS_1002191&viewMode=All&siblings=false
- [x] for decoy proteins and peptides and psms the following columns should be added
- peptide and psm: `opt_global_cv_MS:1002217_decoy_peptide` the value should be 1 for decoy 0 for target
- protein: `opt_global_cv_PRIDE:0000303_decoy_hit` the value should be 1 for decoy and 0 for target.
|
1.0
|
mzTab output errors and improvements - @jpfeuffer @timo:
We have been exploring the mzTab output and detect the following issues:
- [x] all proteins are tag as `indistinguishable_proteins`. @jpfeuffer mention that some proteins should be `single_protein`
- [x] The mzML data `format`, `id_format` from the metadata are wrong:
- [x] `ms_run[n]-format=mzML format` should be `ms_run[n]-format=[MS, MS:1000584, mzML file, ]`
- `ms_run[n]-id_format=mzML unique identifier` should be `ms_run[n]-id_format=[MS, MS:1000768, Thermo nativeID format, ]`
- [ ] `study_variable[n]-description=no description given` we actually knows this from the experimental design.
- [ ] the optional global properties in mztab should contains the CvTerm accession, in the current format the folowing changes should be made:
- opt_global_FFId_category ->
- opt_global_map_index ->
- [x] opt_global_spectrum_reference -> This is already in he header.
- [x] opt_global_modified_sequence -> opt_global_cv_MS:1000889_peptidoform_sequence
- opt_global_feature_id -> @timosachsenberg is this term https://www.ebi.ac.uk/ols/ontologies/ms/terms?iri=http%3A%2F%2Fpurl.obolibrary.org%2Fobo%2FMS_1002191&viewMode=All&siblings=false
- [x] for decoy proteins and peptides and psms the following columns should be added
- peptide and psm: `opt_global_cv_MS:1002217_decoy_peptide` the value should be 1 for decoy 0 for target
- protein: `opt_global_cv_PRIDE:0000303_decoy_hit` the value should be 1 for decoy and 0 for target.
|
priority
|
mztab output errors and improvements jpfeuffer timo we have been exploring the mztab output and detect the following issues all proteins are tag as indistinguishable proteins jpfeuffer mention that some proteins should be single protein the mzml data format id format from the metadata are wrong ms run format mzml format should be ms run format ms run id format mzml unique identifier should be ms run id format study variable description no description given we actually knows this from the experimental design the optional global properties in mztab should contains the cvterm accession in the current format the folowing changes should be made opt global ffid category opt global map index opt global spectrum reference this is already in he header opt global modified sequence opt global cv ms peptidoform sequence opt global feature id timosachsenberg is this term for decoy proteins and peptides and psms the following columns should be added peptide and psm opt global cv ms decoy peptide the value should be for decoy for target protein opt global cv pride decoy hit the value should be for decoy and for target
| 1
|
249,025
| 7,948,732,919
|
IssuesEvent
|
2018-07-11 09:05:43
|
wordpress-mobile/AztecEditor-Android
|
https://api.github.com/repos/wordpress-mobile/AztecEditor-Android
|
reopened
|
OOB Crash in AztecAttributes setValue/removeAttribute
|
bug high priority
|
```
Fatal Exception: java.lang.ArrayIndexOutOfBoundsException: length=5; index=7
at org.xml.sax.helpers.AttributesImpl.addAttribute(AttributesImpl.java:385)
at org.wordpress.aztec.AztecAttributes.setValue(AztecAttributes.kt:10)
at org.wordpress.aztec.source.CssStyleFormatter$Companion.addStyleAttribute(CssStyleFormatter.kt:123)
at org.wordpress.aztec.AztecParser.withinNestable(AztecParser.kt:386)
at org.wordpress.aztec.AztecParser.withinHtml(AztecParser.kt:359)
at org.wordpress.aztec.AztecParser.withinHtml(AztecParser.kt:311)
at org.wordpress.aztec.AztecParser.toHtml(AztecParser.kt:92)
at org.wordpress.aztec.AztecText.toPlainHtml(AztecText.kt:1142)
at org.wordpress.aztec.AztecText.toHtml(AztecText.kt:1105)
at org.wordpress.aztec.AztecText.toHtml$default(AztecText.kt:1104)
at org.wordpress.aztec.AztecText.toFormattedHtml(AztecText.kt:1146)
at org.wordpress.aztec.History.beforeTextChanged(History.kt:42)
at org.wordpress.aztec.AztecText$addHistoryLoggingWatcher$historyLoggingWatcher$1.beforeTextChanged(AztecText.kt:480)
at android.widget.TextView.sendBeforeTextChanged(TextView.java:9144)
at android.widget.TextView.access$1600(TextView.java:326)
at android.widget.TextView$ChangeWatcher.beforeTextChanged(TextView.java:11954)
at android.text.SpannableStringBuilder.sendBeforeTextChanged(SpannableStringBuilder.java:1027)
at android.text.SpannableStringBuilder.replace(SpannableStringBuilder.java:523)
at android.text.SpannableStringBuilder.replace(SpannableStringBuilder.java:494)
at android.text.SpannableStringBuilder.replace(SpannableStringBuilder.java:34)
at android.view.inputmethod.BaseInputConnection.replaceText(BaseInputConnection.java:691)
at android.view.inputmethod.BaseInputConnection.setComposingText(BaseInputConnection.java:447)
at com.android.internal.view.IInputConnectionWrapper.executeMessage(IInputConnectionWrapper.java:340)
at com.android.internal.view.IInputConnectionWrapper$MyHandler.handleMessage(IInputConnectionWrapper.java:78)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:158)
at android.app.ActivityThread.main(ActivityThread.java:7230)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120)
```
Internal Ref: 5a8423408cb3c2fa6302badc
|
1.0
|
OOB Crash in AztecAttributes setValue/removeAttribute - ```
Fatal Exception: java.lang.ArrayIndexOutOfBoundsException: length=5; index=7
at org.xml.sax.helpers.AttributesImpl.addAttribute(AttributesImpl.java:385)
at org.wordpress.aztec.AztecAttributes.setValue(AztecAttributes.kt:10)
at org.wordpress.aztec.source.CssStyleFormatter$Companion.addStyleAttribute(CssStyleFormatter.kt:123)
at org.wordpress.aztec.AztecParser.withinNestable(AztecParser.kt:386)
at org.wordpress.aztec.AztecParser.withinHtml(AztecParser.kt:359)
at org.wordpress.aztec.AztecParser.withinHtml(AztecParser.kt:311)
at org.wordpress.aztec.AztecParser.toHtml(AztecParser.kt:92)
at org.wordpress.aztec.AztecText.toPlainHtml(AztecText.kt:1142)
at org.wordpress.aztec.AztecText.toHtml(AztecText.kt:1105)
at org.wordpress.aztec.AztecText.toHtml$default(AztecText.kt:1104)
at org.wordpress.aztec.AztecText.toFormattedHtml(AztecText.kt:1146)
at org.wordpress.aztec.History.beforeTextChanged(History.kt:42)
at org.wordpress.aztec.AztecText$addHistoryLoggingWatcher$historyLoggingWatcher$1.beforeTextChanged(AztecText.kt:480)
at android.widget.TextView.sendBeforeTextChanged(TextView.java:9144)
at android.widget.TextView.access$1600(TextView.java:326)
at android.widget.TextView$ChangeWatcher.beforeTextChanged(TextView.java:11954)
at android.text.SpannableStringBuilder.sendBeforeTextChanged(SpannableStringBuilder.java:1027)
at android.text.SpannableStringBuilder.replace(SpannableStringBuilder.java:523)
at android.text.SpannableStringBuilder.replace(SpannableStringBuilder.java:494)
at android.text.SpannableStringBuilder.replace(SpannableStringBuilder.java:34)
at android.view.inputmethod.BaseInputConnection.replaceText(BaseInputConnection.java:691)
at android.view.inputmethod.BaseInputConnection.setComposingText(BaseInputConnection.java:447)
at com.android.internal.view.IInputConnectionWrapper.executeMessage(IInputConnectionWrapper.java:340)
at com.android.internal.view.IInputConnectionWrapper$MyHandler.handleMessage(IInputConnectionWrapper.java:78)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:158)
at android.app.ActivityThread.main(ActivityThread.java:7230)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120)
```
Internal Ref: 5a8423408cb3c2fa6302badc
|
priority
|
oob crash in aztecattributes setvalue removeattribute fatal exception java lang arrayindexoutofboundsexception length index at org xml sax helpers attributesimpl addattribute attributesimpl java at org wordpress aztec aztecattributes setvalue aztecattributes kt at org wordpress aztec source cssstyleformatter companion addstyleattribute cssstyleformatter kt at org wordpress aztec aztecparser withinnestable aztecparser kt at org wordpress aztec aztecparser withinhtml aztecparser kt at org wordpress aztec aztecparser withinhtml aztecparser kt at org wordpress aztec aztecparser tohtml aztecparser kt at org wordpress aztec aztectext toplainhtml aztectext kt at org wordpress aztec aztectext tohtml aztectext kt at org wordpress aztec aztectext tohtml default aztectext kt at org wordpress aztec aztectext toformattedhtml aztectext kt at org wordpress aztec history beforetextchanged history kt at org wordpress aztec aztectext addhistoryloggingwatcher historyloggingwatcher beforetextchanged aztectext kt at android widget textview sendbeforetextchanged textview java at android widget textview access textview java at android widget textview changewatcher beforetextchanged textview java at android text spannablestringbuilder sendbeforetextchanged spannablestringbuilder java at android text spannablestringbuilder replace spannablestringbuilder java at android text spannablestringbuilder replace spannablestringbuilder java at android text spannablestringbuilder replace spannablestringbuilder java at android view inputmethod baseinputconnection replacetext baseinputconnection java at android view inputmethod baseinputconnection setcomposingtext baseinputconnection java at com android internal view iinputconnectionwrapper executemessage iinputconnectionwrapper java at com android internal view iinputconnectionwrapper myhandler handlemessage iinputconnectionwrapper java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke method java at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java internal ref
| 1
|
743,324
| 25,894,914,435
|
IssuesEvent
|
2022-12-14 21:26:19
|
vscentrum/vsc-software-stack
|
https://api.github.com/repos/vscentrum/vsc-software-stack
|
closed
|
wandb
|
difficulty: easy priority: high Python update site:ugent
|
* link to support ticket: [#2022110960000685](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=100224)
* website: https://wandb.ai/site
* installation docs: https://docs.wandb.ai/quickstart
* toolchain: `foss/2021a`
* easyblock to use: `PythonBundle`
* required dependencies:
* see https://github.com/wandb/wandb/blob/main/requirements.txt
* notes:
* ...
* effort: *(TBD)*
|
1.0
|
wandb - * link to support ticket: [#2022110960000685](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=100224)
* website: https://wandb.ai/site
* installation docs: https://docs.wandb.ai/quickstart
* toolchain: `foss/2021a`
* easyblock to use: `PythonBundle`
* required dependencies:
* see https://github.com/wandb/wandb/blob/main/requirements.txt
* notes:
* ...
* effort: *(TBD)*
|
priority
|
wandb link to support ticket website installation docs toolchain foss easyblock to use pythonbundle required dependencies see notes effort tbd
| 1
|
95,493
| 3,952,398,249
|
IssuesEvent
|
2016-04-29 08:39:02
|
marvinlabs/customer-area
|
https://api.github.com/repos/marvinlabs/customer-area
|
opened
|
Update private file functions for new ownership system
|
bug Priority - high
|
$po_addon->save_post_owners($post_id, $owner['ids'], $owner['type']);
is not correct anymore
|
1.0
|
Update private file functions for new ownership system -
$po_addon->save_post_owners($post_id, $owner['ids'], $owner['type']);
is not correct anymore
|
priority
|
update private file functions for new ownership system po addon save post owners post id owner owner is not correct anymore
| 1
|
61,379
| 3,145,143,807
|
IssuesEvent
|
2015-09-14 16:33:56
|
ceylon/ceylon-ide-eclipse
|
https://api.github.com/repos/ceylon/ceylon-ide-eclipse
|
closed
|
Replace the uses of `CeylonBuilder.getFile()` by calls to the right specialized units
|
high priority improvement IN PROGRESS
|
mainly use `ModifiableSourceFile.getResourceFile()` or `ModifiablePhasedUnit.getResourceFile()`
|
1.0
|
Replace the uses of `CeylonBuilder.getFile()` by calls to the right specialized units - mainly use `ModifiableSourceFile.getResourceFile()` or `ModifiablePhasedUnit.getResourceFile()`
|
priority
|
replace the uses of ceylonbuilder getfile by calls to the right specialized units mainly use modifiablesourcefile getresourcefile or modifiablephasedunit getresourcefile
| 1
|
715,570
| 24,604,478,161
|
IssuesEvent
|
2022-10-14 15:02:29
|
0xPolygonHermez/zkevm-bridge-ui
|
https://api.github.com/repos/0xPolygonHermez/zkevm-bridge-ui
|
opened
|
Switching network disconnects the user from the bridge
|
priority: high type: bug
|
## Summary of Bug
With a new fresh MetaMask account, if you try to finalise your first bridge from L1 to L2 you are being redirected to the "Login" page when you accept to switch your network to the zkEVM one.
### Steps to Reproduce
1. Create a new account in MetaMask.
2. Transfer some Goerli ETH from an old account to the new one.
3. Do a Bridge from L1 to L2.
4. Wait until you are able to finalise it (claim it).
5. Click on the "Finalise" button, and when you are prompt to switch your network to the "zkEVM" one, click on "Accept".
|
1.0
|
Switching network disconnects the user from the bridge - ## Summary of Bug
With a new fresh MetaMask account, if you try to finalise your first bridge from L1 to L2 you are being redirected to the "Login" page when you accept to switch your network to the zkEVM one.
### Steps to Reproduce
1. Create a new account in MetaMask.
2. Transfer some Goerli ETH from an old account to the new one.
3. Do a Bridge from L1 to L2.
4. Wait until you are able to finalise it (claim it).
5. Click on the "Finalise" button, and when you are prompt to switch your network to the "zkEVM" one, click on "Accept".
|
priority
|
switching network disconnects the user from the bridge summary of bug with a new fresh metamask account if you try to finalise your first bridge from to you are being redirected to the login page when you accept to switch your network to the zkevm one steps to reproduce create a new account in metamask transfer some goerli eth from an old account to the new one do a bridge from to wait until you are able to finalise it claim it click on the finalise button and when you are prompt to switch your network to the zkevm one click on accept
| 1
|
639,651
| 20,761,029,392
|
IssuesEvent
|
2022-03-15 16:10:00
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Error on data entry using Polygon for geolocate
|
Priority-High (Needed for work) Bug
|
Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Describe the bug**
We attempted to save a new catalog record on which we had used a polygon as the spatial data type

This is the error message

**To Reproduce**
Select Polygon. Create polygon. Save to record. Save locality. Try to save as a new record
We didn't expect to see any error or coordinates as those are from the polygon per [4259](https://github.com/ArctosDB/arctos/issues/4259)
Is this a bug or do we need to do something differently during data entry? We haven't had any problems switching from point-radius to polygon in existing records. This is the first time we've tried it during data entry.
|
1.0
|
Error on data entry using Polygon for geolocate - Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Describe the bug**
We attempted to save a new catalog record on which we had used a polygon as the spatial data type

This is the error message

**To Reproduce**
Select Polygon. Create polygon. Save to record. Save locality. Try to save as a new record
We didn't expect to see any error or coordinates as those are from the polygon per [4259](https://github.com/ArctosDB/arctos/issues/4259)
Is this a bug or do we need to do something differently during data entry? We haven't had any problems switching from point-radius to polygon in existing records. This is the first time we've tried it during data entry.
|
priority
|
error on data entry using polygon for geolocate issue documentation is describe the bug we attempted to save a new catalog record on which we had used a polygon as the spatial data type this is the error message to reproduce select polygon create polygon save to record save locality try to save as a new record we didn t expect to see any error or coordinates as those are from the polygon per is this a bug or do we need to do something differently during data entry we haven t had any problems switching from point radius to polygon in existing records this is the first time we ve tried it during data entry
| 1
|
439,445
| 12,682,821,529
|
IssuesEvent
|
2020-06-19 18:15:35
|
Tyresius92/le-chat
|
https://api.github.com/repos/Tyresius92/le-chat
|
closed
|
[BUG] Change master branch to main branch
|
Priority: High Status: Complete Type: Enhancement/Feature Type: Maintenance
|
**What pain are you feeling that this feature would address?**
Race inequity in America
**Describe the solution you'd like**
Change the name of `master` branch to `main`. Additionally, verify that CI pipelines aren't explicitly looking for `master`. Finally, make sure that GitHub itself recognizes `main` as the main branch.
**Describe alternatives you've considered**
Could use `latest` or something, but `main` has the added perk of `git checkout ma` + TAB muscle memory stuff.
|
1.0
|
[BUG] Change master branch to main branch - **What pain are you feeling that this feature would address?**
Race inequity in America
**Describe the solution you'd like**
Change the name of `master` branch to `main`. Additionally, verify that CI pipelines aren't explicitly looking for `master`. Finally, make sure that GitHub itself recognizes `main` as the main branch.
**Describe alternatives you've considered**
Could use `latest` or something, but `main` has the added perk of `git checkout ma` + TAB muscle memory stuff.
|
priority
|
change master branch to main branch what pain are you feeling that this feature would address race inequity in america describe the solution you d like change the name of master branch to main additionally verify that ci pipelines aren t explicitly looking for master finally make sure that github itself recognizes main as the main branch describe alternatives you ve considered could use latest or something but main has the added perk of git checkout ma tab muscle memory stuff
| 1
|
105,442
| 4,235,918,299
|
IssuesEvent
|
2016-07-05 16:39:58
|
dmusican/Elegit
|
https://api.github.com/repos/dmusican/Elegit
|
opened
|
Speedup switching repos
|
enhancement priority high
|
Switching from a small repo to a big one is slow and unresponsive. Add in the tricks that we're using when cloning a repo.
|
1.0
|
Speedup switching repos - Switching from a small repo to a big one is slow and unresponsive. Add in the tricks that we're using when cloning a repo.
|
priority
|
speedup switching repos switching from a small repo to a big one is slow and unresponsive add in the tricks that we re using when cloning a repo
| 1
|
383,403
| 11,355,768,777
|
IssuesEvent
|
2020-01-24 20:52:20
|
ShabadOS/desktop
|
https://api.github.com/repos/ShabadOS/desktop
|
closed
|
Live Caption Tool Additions
|
Priority: 3 High Status: Confirmed Type: Feature/Enhancement
|
- Apply any changes within the tool itself, so that CSS doesn't have to be copied
- Save and load templates
|
1.0
|
Live Caption Tool Additions - - Apply any changes within the tool itself, so that CSS doesn't have to be copied
- Save and load templates
|
priority
|
live caption tool additions apply any changes to within the tool itself so that css doesn t have to be copied save and load templates
| 1
|
367,505
| 10,854,566,949
|
IssuesEvent
|
2019-11-13 16:38:44
|
woocommerce/woocommerce
|
https://api.github.com/repos/woocommerce/woocommerce
|
closed
|
Up sells default to order being desc
|
bug has pull request priority: high
|
**Describe the bug**
Up sells default to having their `$order` set to `desc`, and this is not filterable, even though `$orderby` is.
Function here: https://github.com/woocommerce/woocommerce/blob/3.8.0/includes/wc-template-functions.php#L1930-L1975
This should fix it:
```
$order = apply_filters( 'woocommerce_upsells_order', isset( $args['order'] ) ? $args['order'] : $order );
```
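The proposed fix relies on WordPress's filter-hook mechanism. As a minimal Python sketch (an illustration only, not WooCommerce's actual implementation) of how `apply_filters` lets downstream code override a default value such as `$order`:

```python
# Minimal sketch of the filter-hook pattern behind apply_filters():
# a named hook maps to callbacks that each transform the value in turn.
_filters = {}

def add_filter(name, fn):
    _filters.setdefault(name, []).append(fn)

def apply_filters(name, value):
    for fn in _filters.get(name, []):
        value = fn(value)
    return value

# A theme overriding the default upsell order, analogous to the fix:
add_filter("woocommerce_upsells_order", lambda order: "asc")
print(apply_filters("woocommerce_upsells_order", "desc"))  # -> asc
```

With no callback registered, the default (`desc`) passes through unchanged, which is why making `$order` filterable is backwards compatible.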
|
1.0
|
Up sells default to order being desc - **Describe the bug**
Up sells default to having their `$order` set to `desc`, and this is not filterable, even though `$orderby` is.
Function here: https://github.com/woocommerce/woocommerce/blob/3.8.0/includes/wc-template-functions.php#L1930-L1975
This should fix it:
```
$order = apply_filters( 'woocommerce_upsells_order', isset( $args['order'] ) ? $args['order'] : $order );
```
|
priority
|
up sells default to order being desc describe the bug up sells default to having their order to be desc and this is not filterable even though orderby is function here this should fix it order apply filters woocommerce upsells order isset args args order
| 1
|
430,720
| 12,464,895,829
|
IssuesEvent
|
2020-05-28 13:14:38
|
firecracker-microvm/firecracker
|
https://api.github.com/repos/firecracker-microvm/firecracker
|
closed
|
Firecracker panics cause a seccomp violation
|
Feature: Seccomp Priority: High Quality: Bug
|
Thanks @serban300 for pointing this out.
When Firecracker `panic!`s, it emits a blacklisted `SYS_mremap` which breaks the seccomp filter.
To repro, I added a dummy `panic!` in the `api_server` thread just after installing the seccomp filters:
```diff
diff --git a/api_server/src/lib.rs b/api_server/src/lib.rs
index afb8f42..5b43c7c 100644
--- a/api_server/src/lib.rs
+++ b/api_server/src/lib.rs
@@ -149,6 +149,8 @@ impl ApiServer {
);
}
+ panic!("Terminating");
+
// This runs forever, unless an error is returned somewhere within f (but nothing happens
```
```bash
target/x86_64-unknown-linux-musl/debug/firecracker --api-sock /tmp/a.sock
2019-05-10T11:13:12.182703303 [anonymous-instance:ERROR:src/main.rs:55] Firecracker panicked at 'Terminating', api_server/src/lib.rs:152:9
2019-05-10T11:13:12.307724144 [anonymous-instance:ERROR:vmm/src/sigsys_handler.rs:69] Shutting down VM after intercepting a bad syscall (25).
2019-05-10T11:13:12.307818289 [anonymous-instance:ERROR:vmm/src/sigsys_handler.rs:75] Failed to log metrics while stopping: Logger was not initialized.
```
The `mremap` originates in `libbacktrace`:
```rust
#0 __mremap (old_addr=0x7ffff7db9000, old_len=233472, new_len=new_len@entry=237568, flags=flags@entry=1) at src/mman/mremap.c:14
#1 0x0000000000a2784c in realloc (p=0x7ffff7db9020, n=234656) at src/malloc/malloc.c:397
#2 0x00000000009dca63 in __rbt_backtrace_vector_grow ()
#3 0x00000000009dddab in add_unit_addr ()
#4 0x00000000009de931 in add_unit_ranges ()
#5 0x00000000009decb6 in find_address_ranges ()
#6 0x00000000009df23d in build_address_map ()
#7 0x00000000009e2309 in build_dwarf_data ()
#8 0x00000000009e24da in __rbt_backtrace_dwarf_add ()
#9 0x00000000009dc3bf in elf_add ()
#10 0x00000000009dc7e5 in __rbt_backtrace_initialize ()
#11 0x00000000009d708a in fileline_initialize ()
#12 0x00000000009d7135 in __rbt_backtrace_pcinfo ()
#13 0x00000000009d56ce in backtrace::symbolize::libbacktrace::resolve::h746412056539717c (symaddr=0x9d23ac <backtrace::backtrace::trace_unsynchronized::h2c34a021aa0e95d7+44>, cb=...)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/symbolize/libbacktrace.rs:171
#14 0x00000000009cd498 in backtrace::symbolize::resolve_unsynchronized::hb7eccc81fdd0039a (addr=0x9d23ac <backtrace::backtrace::trace_unsynchronized::h2c34a021aa0e95d7+44>, cb=...)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/symbolize/mod.rs:62
#15 0x00000000009cd460 in backtrace::symbolize::resolve::h53981fa6789bc746 (addr=0x9d23ac <backtrace::backtrace::trace_unsynchronized::h2c34a021aa0e95d7+44>, cb=...)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/symbolize/mod.rs:51
#16 0x00000000009c84de in backtrace::capture::Backtrace::resolve::h748b5356e9e44001 (self=0x7fffffff7db8)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/capture.rs:146
#17 0x00000000009c8050 in backtrace::capture::Backtrace::new::h622dc69a5dec5128 ()
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/capture.rs:72
#18 0x0000000000400f84 in firecracker::main::_$u7b$$u7b$closure$u7d$$u7d$::hdc43da8746fa8ee1 (info=0x7fffffff80a8) at src/main.rs:59
#19 0x0000000000a135ea in std::panicking::rust_panic_with_hook::h28b9ce6fa7a5033b () at src/libstd/panicking.rs:495
#20 0x00000000006132b8 in std::panicking::begin_panic::hc75a354ae6cfa9ae (msg=..., file_line_col=0xd21b18) at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/panicking.rs:425
#21 0x00000000004068d4 in api_server::ApiServer::bind_and_run::h4ab39ac0c86ac506 (self=0x7fffffffd8e0, path=..., start_time_us=..., start_time_cpu_us=..., seccomp_level=2)
at api_server/src/lib.rs:152
#22 0x0000000000403e5f in firecracker::main::h28360df3fa9d0378 () at src/main.rs:168
#23 0x0000000000402a10 in std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h3e2f7377df0eeb37 () at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/rt.rs:74
#24 0x0000000000a12f13 in std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::hbd394198e0f45efb () at src/libstd/rt.rs:59
#25 std::panicking::try::do_call::hf93a787b72e1d226 () at src/libstd/panicking.rs:310
#26 0x0000000000a1e059 in __rust_maybe_catch_panic () at src/libpanic_abort/lib.rs:39
#27 0x0000000000a138da in std::panicking::try::h9b83fe1076812e50 () at src/libstd/panicking.rs:289
#28 std::panic::catch_unwind::h8a94b67bdbd8163d () at src/libstd/panic.rs:398
#29 std::rt::lang_start_internal::h7b3bd8c78881c37d () at src/libstd/rt.rs:58
#30 0x00000000004029e9 in std::rt::lang_start::h0fdb6015f3270167 (main=0x4031e0 <firecracker::main::h28360df3fa9d0378>, argc=3, argv=0x7fffffffdd28)
at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/rt.rs:74
#31 0x00000000004042ca in main ()
```
The `panic!` terminates the process anyway, but the backtrace is lost because of the seccomp issue.
|
1.0
|
Firecracker panics cause a seccomp violation - Thanks @serban300 for pointing this out.
When Firecracker `panic!`s, it emits a blacklisted `SYS_mremap` which breaks the seccomp filter.
To repro, I added a dummy `panic!` in the `api_server` thread just after installing the seccomp filters:
```diff
diff --git a/api_server/src/lib.rs b/api_server/src/lib.rs
index afb8f42..5b43c7c 100644
--- a/api_server/src/lib.rs
+++ b/api_server/src/lib.rs
@@ -149,6 +149,8 @@ impl ApiServer {
);
}
+ panic!("Terminating");
+
// This runs forever, unless an error is returned somewhere within f (but nothing happens
```
```bash
target/x86_64-unknown-linux-musl/debug/firecracker --api-sock /tmp/a.sock
2019-05-10T11:13:12.182703303 [anonymous-instance:ERROR:src/main.rs:55] Firecracker panicked at 'Terminating', api_server/src/lib.rs:152:9
2019-05-10T11:13:12.307724144 [anonymous-instance:ERROR:vmm/src/sigsys_handler.rs:69] Shutting down VM after intercepting a bad syscall (25).
2019-05-10T11:13:12.307818289 [anonymous-instance:ERROR:vmm/src/sigsys_handler.rs:75] Failed to log metrics while stopping: Logger was not initialized.
```
The `mremap` originates in `libbacktrace`:
```rust
#0 __mremap (old_addr=0x7ffff7db9000, old_len=233472, new_len=new_len@entry=237568, flags=flags@entry=1) at src/mman/mremap.c:14
#1 0x0000000000a2784c in realloc (p=0x7ffff7db9020, n=234656) at src/malloc/malloc.c:397
#2 0x00000000009dca63 in __rbt_backtrace_vector_grow ()
#3 0x00000000009dddab in add_unit_addr ()
#4 0x00000000009de931 in add_unit_ranges ()
#5 0x00000000009decb6 in find_address_ranges ()
#6 0x00000000009df23d in build_address_map ()
#7 0x00000000009e2309 in build_dwarf_data ()
#8 0x00000000009e24da in __rbt_backtrace_dwarf_add ()
#9 0x00000000009dc3bf in elf_add ()
#10 0x00000000009dc7e5 in __rbt_backtrace_initialize ()
#11 0x00000000009d708a in fileline_initialize ()
#12 0x00000000009d7135 in __rbt_backtrace_pcinfo ()
#13 0x00000000009d56ce in backtrace::symbolize::libbacktrace::resolve::h746412056539717c (symaddr=0x9d23ac <backtrace::backtrace::trace_unsynchronized::h2c34a021aa0e95d7+44>, cb=...)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/symbolize/libbacktrace.rs:171
#14 0x00000000009cd498 in backtrace::symbolize::resolve_unsynchronized::hb7eccc81fdd0039a (addr=0x9d23ac <backtrace::backtrace::trace_unsynchronized::h2c34a021aa0e95d7+44>, cb=...)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/symbolize/mod.rs:62
#15 0x00000000009cd460 in backtrace::symbolize::resolve::h53981fa6789bc746 (addr=0x9d23ac <backtrace::backtrace::trace_unsynchronized::h2c34a021aa0e95d7+44>, cb=...)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/symbolize/mod.rs:51
#16 0x00000000009c84de in backtrace::capture::Backtrace::resolve::h748b5356e9e44001 (self=0x7fffffff7db8)
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/capture.rs:146
#17 0x00000000009c8050 in backtrace::capture::Backtrace::new::h622dc69a5dec5128 ()
at $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.15/src/capture.rs:72
#18 0x0000000000400f84 in firecracker::main::_$u7b$$u7b$closure$u7d$$u7d$::hdc43da8746fa8ee1 (info=0x7fffffff80a8) at src/main.rs:59
#19 0x0000000000a135ea in std::panicking::rust_panic_with_hook::h28b9ce6fa7a5033b () at src/libstd/panicking.rs:495
#20 0x00000000006132b8 in std::panicking::begin_panic::hc75a354ae6cfa9ae (msg=..., file_line_col=0xd21b18) at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/panicking.rs:425
#21 0x00000000004068d4 in api_server::ApiServer::bind_and_run::h4ab39ac0c86ac506 (self=0x7fffffffd8e0, path=..., start_time_us=..., start_time_cpu_us=..., seccomp_level=2)
at api_server/src/lib.rs:152
#22 0x0000000000403e5f in firecracker::main::h28360df3fa9d0378 () at src/main.rs:168
#23 0x0000000000402a10 in std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h3e2f7377df0eeb37 () at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/rt.rs:74
#24 0x0000000000a12f13 in std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::hbd394198e0f45efb () at src/libstd/rt.rs:59
#25 std::panicking::try::do_call::hf93a787b72e1d226 () at src/libstd/panicking.rs:310
#26 0x0000000000a1e059 in __rust_maybe_catch_panic () at src/libpanic_abort/lib.rs:39
#27 0x0000000000a138da in std::panicking::try::h9b83fe1076812e50 () at src/libstd/panicking.rs:289
#28 std::panic::catch_unwind::h8a94b67bdbd8163d () at src/libstd/panic.rs:398
#29 std::rt::lang_start_internal::h7b3bd8c78881c37d () at src/libstd/rt.rs:58
#30 0x00000000004029e9 in std::rt::lang_start::h0fdb6015f3270167 (main=0x4031e0 <firecracker::main::h28360df3fa9d0378>, argc=3, argv=0x7fffffffdd28)
at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/rt.rs:74
#31 0x00000000004042ca in main ()
```
The `panic!` terminates the process anyway, but the backtrace is lost because of the seccomp issue.
|
priority
|
firecracker panics cause a seccomp violation thanks for pointing this out when firecracker panic s it emits a blacklisted sys mremap which breaks the seccomp filter to repro i added a dummy panic in the api server thread just after installing the seccomp filters diff diff git a api server src lib rs b api server src lib rs index a api server src lib rs b api server src lib rs impl apiserver panic terminating this runs forever unless an error is returned somewhere within f but nothing happens bash target unknown linux musl debug firecracker api sock tmp a sock firecracker panicked at terminating api server src lib rs shutting down vm after intercepting a bad syscall failed to log metrics while stopping logger was not initialized the mremap originates in libbacktrace rust mremap old addr old len new len new len entry flags flags entry at src mman mremap c in realloc p n at src malloc malloc c in rbt backtrace vector grow in add unit addr in add unit ranges in find address ranges in build address map in build dwarf data in rbt backtrace dwarf add in elf add in rbt backtrace initialize in fileline initialize in rbt backtrace pcinfo in backtrace symbolize libbacktrace resolve symaddr cb at home cargo registry src github com backtrace src symbolize libbacktrace rs in backtrace symbolize resolve unsynchronized addr cb at home cargo registry src github com backtrace src symbolize mod rs in backtrace symbolize resolve addr cb at home cargo registry src github com backtrace src symbolize mod rs in backtrace capture backtrace resolve self at home cargo registry src github com backtrace src capture rs in backtrace capture backtrace new at home cargo registry src github com backtrace src capture rs in firecracker main closure info at src main rs in std panicking rust panic with hook at src libstd panicking rs in std panicking begin panic msg file line col at rustc src libstd panicking rs in api server apiserver bind and run self path start time us start time cpu us seccomp 
level at api server src lib rs in firecracker main at src main rs in std rt lang start closure at rustc src libstd rt rs in std rt lang start internal closure at src libstd rt rs std panicking try do call at src libstd panicking rs in rust maybe catch panic at src libpanic abort lib rs in std panicking try at src libstd panicking rs std panic catch unwind at src libstd panic rs std rt lang start internal at src libstd rt rs in std rt lang start main argc argv at rustc src libstd rt rs in main the panic terminates the process anyway but the backtrace is lost because of the seccomp issue
| 1
|
556,887
| 16,493,933,623
|
IssuesEvent
|
2021-05-25 08:13:09
|
opensrp/fhircore
|
https://api.github.com/repos/opensrp/fhircore
|
closed
|
Barcode issue and search in covax use case
|
covid high-priority
|
UPDATED 5.14
This issue has been updated to describe the needed workflows for the covax demo. Please note @f-odhiambo that the WHO confirmed today that this will be a barcode, not a QR code.
We need to build two barcode workflows:
1. Patient registration
2. Patient search
~~@rowo will add visuals in the comments to guide the UI~~ EDIT: Roger added images below and here's a link to clickthrough: https://www.figma.com/proto/Ck46ofqC6kE6ISBC2YmwiI/COVID19-FHIR?page-id=1001%3A5305&node-id=1002%3A5506&viewport=699%2C624%2C0.25&scaling=min-zoom
**Workflow 1: Registration**
1. Hit "register new client"
2. Open registration form
3. Scan barcode button will be the top of the form
4. Scan barcode
5. When successfully scanned, show confirmation in form field
<img src="https://user-images.githubusercontent.com/1584163/118430679-9b379380-b6a2-11eb-88b0-1df78d2bba43.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430684-9f63b100-b6a2-11eb-879a-82bf5cda077e.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430688-a12d7480-b6a2-11eb-87de-a5b5aa96ee99.png" width="200">
**Workflow 2: Patient Search Happy Path**
1. User is on home view
2. Hit search by barcode code button
3. Launches barcode scanner (searches locally and the server when online - for the demo, we could just build local search if that saves time)
4. Successful scan results in the profile view with the patient profile
[This is the same for both online and offline scenarios]
<img src="https://user-images.githubusercontent.com/1584163/118430679-9b379380-b6a2-11eb-88b0-1df78d2bba43.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430865-0d0fdd00-b6a3-11eb-9da0-87b7216db6c5.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430870-11d49100-b6a3-11eb-82f5-7763088dbc23.png" width="200">
---------
NOT for demo (but needs to be built after)
**Workflow 3: Patient Search Not Found Path**
[Steps 1-3 above]
4. App is OFFLINE (aka the expectation is that the patient has been registered elsewhere): Unsuccessful message offering ways forward
5. Hit record vaccine
6. Open vaccination form
Though it is possible for there to be a scenario where the app is online and the barcode scan still does not return a result, this would be an edge case. We do not need to accommodate this in the short term.
|
1.0
|
Barcode issue and search in covax use case - UPDATED 5.14
This issue has been updated to describe the needed workflows for the covax demo. Please note @f-odhiambo that the WHO confirmed today that this will be a barcode, not a QR code.
We need to build two barcode workflows:
1. Patient registration
2. Patient search
~~@rowo will add visuals in the comments to guide the UI~~ EDIT: Roger added images below and here's a link to clickthrough: https://www.figma.com/proto/Ck46ofqC6kE6ISBC2YmwiI/COVID19-FHIR?page-id=1001%3A5305&node-id=1002%3A5506&viewport=699%2C624%2C0.25&scaling=min-zoom
**Workflow 1: Registration**
1. Hit "register new client"
2. Open registration form
3. Scan barcode button will be the top of the form
4. Scan barcode
5. When successfully scanned, show confirmation in form field
<img src="https://user-images.githubusercontent.com/1584163/118430679-9b379380-b6a2-11eb-88b0-1df78d2bba43.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430684-9f63b100-b6a2-11eb-879a-82bf5cda077e.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430688-a12d7480-b6a2-11eb-87de-a5b5aa96ee99.png" width="200">
**Workflow 2: Patient Search Happy Path**
1. User is on home view
2. Hit search by barcode code button
3. Launches barcode scanner (searches locally and the server when online - for the demo, we could just build local search if that saves time)
4. Successful scan results in the profile view with the patient profile
[This is the same for both online and offline scenarios]
<img src="https://user-images.githubusercontent.com/1584163/118430679-9b379380-b6a2-11eb-88b0-1df78d2bba43.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430865-0d0fdd00-b6a3-11eb-9da0-87b7216db6c5.png" width="200">
<img src="https://user-images.githubusercontent.com/1584163/118430870-11d49100-b6a3-11eb-82f5-7763088dbc23.png" width="200">
---------
NOT for demo (but needs to be built after)
**Workflow 3: Patient Search Not Found Path**
[Steps 1-3 above]
4. App is OFFLINE (aka the expectation is that the patient has been registered elsewhere): Unsuccessful message offering ways forward
5. Hit record vaccine
6. Open vaccination form
Though it is possible for there to be a scenario where the app is online and the barcode scan still does not return a result, this would be an edge case. We do not need to accommodate this in the short term.
|
priority
|
barcode issue and search in covax use case updated this issue has been updated to describe the needed workflows for the covax demo please note f odhiambo that the who confirmed today that this will be a barcode not a qr code we need to build two barcode workflows patient registration patient search rowo will add visuals in the comments to guide the ui edit roger added images below and here s a link to clickthrough workflow registration hit register new client open registration form scan barcode button will be the top of the form scan barcode when successfully scanned show confirmation in form field workflow patient search happy path user is on home view hit search by barcode code button launches barcode scanner searches locally and the server when online for the demo we could just build local search if that saves time successful scan results in the profile view with the patient profile not for demo but needs to be built after workflow patient search not found path app is offline aka the expectation is that the patient has been registered elsewhere unsuccessful message offering ways forward hit record vaccine open vaccination form though it is possible for there to be a scenario where the app is online and the barcode scan still does not return a result this would be an edge case we do not need to accommodate this in the short term
| 1
|
554,003
| 16,387,118,869
|
IssuesEvent
|
2021-05-17 11:59:00
|
primefaces/primevue
|
https://api.github.com/repos/primefaces/primevue
|
closed
|
Templating for Menus
|
enhancement priority - high
|
Create a template for menu components so that users can display custom content.
|
1.0
|
Templating for Menus - Create a template for menu components so that users can display custom content.
|
priority
|
templating for menus create a template for menu components so that users can display custom content
| 1
|
445,870
| 12,837,453,292
|
IssuesEvent
|
2020-07-07 15:48:39
|
aces/cbrain
|
https://api.github.com/repos/aces/cbrain
|
opened
|
Double check the tar command's success when archiving tasks
|
Bug Priority: High
|
The tar command that creates archives for tasks is checked for error messages in stdout and stderr, but the return status is not checked, because some tar versions will return false even when there are only minor warnings.
Tar command:
https://github.com/aces/cbrain/blob/60c7dead132dfce31eff37c021a1d4bf9b219174/BrainPortal/app/models/cluster_task.rb#L1416
Commented out code for checking the return status:
https://github.com/aces/cbrain/blob/60c7dead132dfce31eff37c021a1d4bf9b219174/BrainPortal/app/models/cluster_task.rb#L1438
However, if the tar command is killed with a signal, tar does not print any messages at all, so the system has no way of knowing that the tar command did not finish properly. The return code is false, but that is not checked (again, because sometimes we are ok with some warnings). This results in truncated archives being stored in CBRAIN.
What we must do is check the signal information stored in $? (as a Process::Status object) just after we system() the tar command.
|
1.0
|
Double check the tar command's success when archiving tasks - The tar command that creates archives for tasks is checked for error messages in stdout and stderr, but the return status is not checked, because some tar versions will return false even when there are only minor warnings.
Tar command:
https://github.com/aces/cbrain/blob/60c7dead132dfce31eff37c021a1d4bf9b219174/BrainPortal/app/models/cluster_task.rb#L1416
Commented out code for checking the return status:
https://github.com/aces/cbrain/blob/60c7dead132dfce31eff37c021a1d4bf9b219174/BrainPortal/app/models/cluster_task.rb#L1438
However, if the tar command is killed with a signal, tar does not print any messages at all, so the system has no way of knowing that the tar command did not finish properly. The return code is false, but that is not checked (again, because sometimes we are ok with some warnings). This results in truncated archives being stored in CBRAIN.
What we must do is check the signal information stored in $? (as a Process::Status object) just after we system() the tar command.
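The project's code is Ruby, but the proposed check translates directly; as a hedged Python analogue (an illustrative stand-in, not the project's code) of inspecting the child's exit status for a signal right after running tar:

```python
import signal
import subprocess
import sys

# Sketch of checking the child's status after a system()-style call.
# A child killed by a signal prints nothing, but its exit status still
# records the signal; subprocess reports it as a negative returncode.
# Here a dummy child kills itself with SIGKILL to simulate a killed tar.
proc = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"],
    capture_output=True,
)
if proc.returncode < 0:
    # Killed by a signal: any archive written so far must be treated
    # as truncated, regardless of how quiet stdout/stderr were.
    print(f"child killed by signal {-proc.returncode}")
```

In Ruby the same information lives in the `Process::Status` object in `$?` (`$?.signaled?` / `$?.termsig`), which is exactly what the issue proposes to check.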
|
priority
|
double check the tar command s success when archiving tasks the tar command that creates archives for tasks is checked for error messages in stdout and stderr but the return status is not check because some tar will return false even when there are minor warnings tar command commented out code for checking the return status however if the tar command is killed with a signal tar does not print any messages at all so the system has no way of knowing that the tar command did not finish properly the return code is false but that is not checked again because sometimes we are ok with some warnings this results in truncated archived being stored in cbrain what we must do is check the signal information stored in as a process status object just after we system the tar command
| 1
|
531,535
| 15,499,998,298
|
IssuesEvent
|
2021-03-11 08:45:14
|
Thorium-Sim/thorium
|
https://api.github.com/repos/Thorium-Sim/thorium
|
opened
|
Looping Fade in Fade Out for Lights on DMX
|
priority/high type/feature
|
### Requested By: Bracken
### Priority: High
### Version: 3.3.1
Possible?
|
1.0
|
Looping Fade in Fade Out for Lights on DMX - ### Requested By: Bracken
### Priority: High
### Version: 3.3.1
Possible?
|
priority
|
looping fade in fade out for lights on dmx requested by bracken priority high version possible
| 1
|
498,206
| 14,403,329,554
|
IssuesEvent
|
2020-12-03 15:55:49
|
parkourtheory-admin/datapipe
|
https://api.github.com/repos/parkourtheory-admin/datapipe
|
opened
|
Task for percentage of nodes of each graph
|
easy focus high priority
|
Bar plot for frequency of nodes that occur in each graph
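As a sketch of the requested counting step (hypothetical data; the project's real graph format is not shown in this record), the per-node frequencies behind such a bar plot could be gathered with a `Counter`:

```python
from collections import Counter

# Hypothetical input: each graph maps to the nodes (moves) it contains.
graphs = {
    "g1": ["kong", "precision"],
    "g2": ["kong", "lazy vault"],
    "g3": ["kong"],
}

# Count how often each node occurs across all graphs; these counts are
# the heights of the bars in the requested frequency plot.
freq = Counter(node for nodes in graphs.values() for node in nodes)
print(freq.most_common(1))  # -> [('kong', 3)]
```

Feeding `freq.keys()` and `freq.values()` to any bar-plot call (e.g. matplotlib's `plt.bar`) then produces the requested chart.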
|
1.0
|
Task for percentage of nodes of each graph - Bar plot for frequency of nodes that occur in each graph
|
priority
|
task for percentage of nodes of each graph bar plot for frequency of nodes that occur in each graph
| 1
|
552,525
| 16,242,564,515
|
IssuesEvent
|
2021-05-07 11:20:00
|
InteractiveFaultLocalization/iFL4Eclipse
|
https://api.github.com/repos/InteractiveFaultLocalization/iFL4Eclipse
|
closed
|
Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench", org.eclipse.swt.SWTException: Widget is disposed
|
bug high priority
|
**Precondition**
* .../eclipse.exe has launched.
* iFL plugin has been installed.
**Steps**
1. Delete eclipse.
1. Reinstall eclipse.
1. Start eclipse with the usually used workspace.
1. Press iFL button.
1. Close eclipse.
**Expected results**
* After reinstalling eclipse, iFL should work properly.
**Received results**
```
!MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench".
!STACK 0
org.eclipse.swt.SWTException: Widget is disposed
```
* Log file:
[exit_error_log.txt](https://github.com/InteractiveFaultLocalization/iFL4Eclipse/files/6286179/exit_error_log.txt)
**Related issue:**
* #115 After reinstall and start eclipse, plenty of error occurs in log: java.lang.RuntimeException and org.eclipse.swt.SWTException
**Environment:**
* Package: https://github.com/sed-szeged/iFL4Eclipse/releases/tag/V2.duallist-sorting.3
* Operating System: Windows 10 Pro, 64 bit
* Eclipse version: 2019-09 R (4.13.0)
|
1.0
|
Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench", org.eclipse.swt.SWTException: Widget is disposed - **Precondition**
* .../eclipse.exe has launched.
* iFL plugin has been installed.
**Steps**
1. Delete eclipse.
1. Reinstall eclipse.
1. Start eclipse with the usually used workspace.
1. Press iFL button.
1. Close eclipse.
**Expected results**
* After reinstalling eclipse, iFL should work properly.
**Received results**
```
!MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench".
!STACK 0
org.eclipse.swt.SWTException: Widget is disposed
```
* Log file:
[exit_error_log.txt](https://github.com/InteractiveFaultLocalization/iFL4Eclipse/files/6286179/exit_error_log.txt)
**Related issue:**
* #115 After reinstall and start eclipse, plenty of error occurs in log: java.lang.RuntimeException and org.eclipse.swt.SWTException
**Environment:**
* Package: https://github.com/sed-szeged/iFL4Eclipse/releases/tag/V2.duallist-sorting.3
* Operating System: Windows 10 Pro, 64 bit
* Eclipse version: 2019-09 R (4.13.0)
|
priority
|
problems occurred when invoking code from plug in org eclipse ui workbench org eclipse swt swtexception widget is disposed precondition eclipse exe has launched ifl plugin has been installed steps delete eclipse reinstall eclipse start eclipse with the usually used workspace press ifl button close eclipse expected results after reinstall eclipse ifl should be work properly received results message problems occurred when invoking code from plug in org eclipse ui workbench stack org eclipse swt swtexception widget is disposed log file related issue after reinstall and start eclipse plenty of error occurs in log java lang runtimeexception and org eclipse swt swtexception environment package operating system windows pro bit eclipse version r
| 1
|
158,133
| 6,022,485,889
|
IssuesEvent
|
2017-06-07 21:10:28
|
OperationCode/operationcode_frontend
|
https://api.github.com/repos/OperationCode/operationcode_frontend
|
closed
|
Create code schools page
|
Priority: High Status: In Progress Type: Feature
|
This is our most visited page behind /, and probably our biggest source of search traffic.
This page should display a list of code schools taken from `https://api.operationcode.org/api/v1/code_schools`.
VA approved code schools should be displayed up top.
Following that, code schools should be listed by state.
The list should be easily navigable. It's currently implemented with drawers but we don't have to stick to that design.
|
1.0
|
Create code schools page - This is our most visited page behind /, and probably our biggest source of search traffic.
This page should display a list of code schools taken from `https://api.operationcode.org/api/v1/code_schools`.
VA approved code schools should be displayed up top.
Following that, code schools should be listed by state.
The list should be easily navigable. It's currently implemented with drawers but we don't have to stick to that design.
|
priority
|
create code schools page this is our most visited page behind and probably our biggest source of search traffic this page should display a list of code schools taken from va approved code schools should be displayed up top following that code schools should be listed be state the list should be easily navigable it s currently implemented with drawers but we don t have to stick to that design
| 1
|
596,638
| 18,108,783,183
|
IssuesEvent
|
2021-09-22 23:00:17
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
QGIS crashes while opening symbology properties of geopackage with embedded style
|
High Priority Bug Crash/Data Corruption
|
### What is the bug or the crash?
I have a geopackage with embedded styles, created in the following way:
Drag-and-drop dxf > Package layers (save layer styles in geopackage) > [embedded.gpkg.zip](https://github.com/qgis/QGIS/files/7195931/embedded.gpkg.zip)
QGIS crashes when trying to open layer styling panel or properties on this layer.
### Steps to reproduce the issue
1. Load linked geopackage
2. Open layer styling panel / properties for layer Lines
3. crash
### Versions
bc60331c17 on Debian Bullseye
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
QGIS crashes while opening symbology properties of geopackage with embedded style - ### What is the bug or the crash?
I have a geopackage with embedded styles, created in the following way:
Drag-and-drop dxf > Package layers (save layer styles in geopackage) > [embedded.gpkg.zip](https://github.com/qgis/QGIS/files/7195931/embedded.gpkg.zip)
QGIS crashes when trying to open layer styling panel or properties on this layer.
### Steps to reproduce the issue
1. Load linked geopackage
2. Open layer styling panel / properties for layer Lines
3. crash
### Versions
bc60331c17 on Debian Bullseye
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
priority
|
qgis crashes while opening symbology properties of geopackage with embedded style what is the bug or the crash i have a geopackage with embedded styles created in the following way drag and drop dxf package layers save layer styles in geopackage qgis crashes when trying to open layer styling panel or properties on this layer steps to reproduce the issue load linked geopackage open layer styling panel properties for layer lines crash versions on debian bullseye supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
5,296
| 2,573,949,134
|
IssuesEvent
|
2015-02-11 14:08:51
|
pgmpy/pgmpy
|
https://api.github.com/repos/pgmpy/pgmpy
|
closed
|
Variable Elimination's values a bit off of what Samiam gives
|
High Priority
|
Check the values in our tests and for the model https://gist.github.com/ankurankan/9d776655882e7361af1f.
Also, in the case of multiple variables with multiple evidence the values are much more different.
Check why this is happening..
|
1.0
|
Variable Elimination's values a bit off of what Samiam gives - Check the values in our tests and for the model https://gist.github.com/ankurankan/9d776655882e7361af1f.
Also, in the case of multiple variables with multiple evidence the values are much more different.
Check why this is happening..
|
priority
|
variable elimination s values a bit off of what samiam gives check the values in our tests and for the model also in the case of multiple variables with multiple evidence the values are much difference check why this is happening
| 1
|
364,053
| 10,758,237,019
|
IssuesEvent
|
2019-10-31 14:39:51
|
vigetlabs/npm
|
https://api.github.com/repos/vigetlabs/npm
|
opened
|
Audiences page: Audience Carousel bug
|
FED High Priority Needs QA Fixes
|
Currently, image & stats are displaying beneath the carousel. We need to fix this for browser testing, so I'm capturing it in its own issue.
https://npm.staging.vigetx.com/audience/
https://www.dropbox.com/s/bk5dd2vh00p83e7/Screenshot%202019-10-31%2010.38.18.png?dl=0
|
1.0
|
Audiences page: Audience Carousel bug - Currently, image & stats are displaying beneath the carousel. We need to fix this so for browser testing so i'm capturing in its own issue.
https://npm.staging.vigetx.com/audience/
https://www.dropbox.com/s/bk5dd2vh00p83e7/Screenshot%202019-10-31%2010.38.18.png?dl=0
|
priority
|
audiences page audience carousel bug currently image stats are displaying beneath the carousel we need to fix this so for browser testing so i m capturing in its own issue
| 1
|
799,247
| 28,303,033,235
|
IssuesEvent
|
2023-04-10 08:11:38
|
AY2223S2-CS2113-T11-3/tp
|
https://api.github.com/repos/AY2223S2-CS2113-T11-3/tp
|
closed
|
DG Issues
|
type.Task priority.High
|
Nice work overall!
- [x] Do finish up the Product Scope, NFRs, Glossary, and Manual Testing sections.

- [x] (Bye) Should the activation bar end here?

- [x] (List, Remove) Should the activation bars begin before the operations are invoked?

- [x] (List) Is the control not returned from `pet` before the loop ends?

- [x] (Add) Consider having the error squiggly lines not be present in the diagram - not UML standard.

- [ ] (Remove) Consider [using a reference frame](https://nus-cs2113-ay2223s2.github.io/website/schedule/week10/topics.html#tools-uml-sequence-diagrams-reference-frames) if the sequence diagram is too big?
- [x] (Remove) Could these return arrows be incomplete?

|
1.0
|
DG Issues - Nice work overall!
- [x] Do finish up the Product Scope, NFRs, Glossary, and Manual Testing sections.

- [x] (Bye) Should the activation bar end here?

- [x] (List, Remove) Should the activation bars begin before the operations are invoked?

- [x] (List) Is the control not returned from `pet` before the loop ends?

- [x] (Add) Consider having the error squiggly lines not be present in the diagram - not UML standard.

- [ ] (Remove) Consider [using a reference frame](https://nus-cs2113-ay2223s2.github.io/website/schedule/week10/topics.html#tools-uml-sequence-diagrams-reference-frames) if the sequence diagram is too big?
- [x] (Remove) Could these return arrows be incomplete?

|
priority
|
dg issues nice work overall do finish up the product scope nfrs glossary and manual testing sections bye should the activation bar end here list remove should the activation bars begin before the operations are invoked list is the control not returned from pet before the loop ends add consider having the error squiggly lines not be present in the diagram not uml standard remove consider if the sequence diagram is too big remove could these return arrows be incomplete
| 1
|
347,845
| 10,435,179,291
|
IssuesEvent
|
2019-09-17 16:42:07
|
infiniteautomation/ma-core-public
|
https://api.github.com/repos/infiniteautomation/ma-core-public
|
opened
|
Make email addresses unique for Users
|
High Priority Item
|
Mango 3.7.0 will have this feature but currently will delete all duplicates. Instead we should generate a unique email for that person.
|
1.0
|
Make email addresses unique for Users - Mango 3.7.0 will have this feature but currently will delete all duplicates. Instead we should generate a unique email for that person.
|
priority
|
make email addresses unique for users mango will have this feature but currently will delete all duplicates instead we should generate a unique email for that person
| 1
|
691,627
| 23,704,366,392
|
IssuesEvent
|
2022-08-29 22:33:44
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Investigate from_padded implementations correctness
|
high priority triaged module: nestedtensor module: correctness (silent) release notes: nested tensor
|
## Summary
A gradient test involving these ops created here: #84078 is succeeding for float64 on cuda but failing for float16 and float32 on cuda. The input sizes are small enough that floating point accumulation should not be affecting these tests.
## Env Details:
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.3
Libc version: glibc-2.27
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] torchdynamo==1.13.0.dev0
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torchdynamo 1.13.0.dev0 dev_0 <develop>
cc @ezyang @gchanan @zou3519 @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
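For context on the precision argument above (the report rules out accumulation error at these input sizes), a minimal sketch of how sequential accumulation error differs between float32 and float64 — the sizes, values, and loop here are illustrative assumptions, not taken from the actual test:

```python
import numpy as np

# Sum 10,000 copies of 0.1 sequentially in float32, and with numpy's
# float64 summation, then compare each result against the exact value 1000.
vals = np.full(10000, 0.1)

s32 = np.float32(0.0)
for v in vals.astype(np.float32):
    s32 += v  # low-order bits are lost at every addition in float32

s64 = vals.astype(np.float64).sum()

err32 = abs(float(s32) - 1000.0)
err64 = abs(float(s64) - 1000.0)
# err32 is many orders of magnitude larger than err64
```

At small input sizes this gap shrinks toward rounding noise, which is why the report treats the float16/float32 failures as a correctness signal rather than an accumulation artifact.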
|
1.0
|
Investigate from_padded implementations correctness - ## Summary
A gradient test involving these ops created here: #84078 is succeeding for float64 on cuda but failing for float16 and float32 on cuda. The input sizes are small enough that floating point accumulation should not be affecting these tests.
## Env Details:
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.3
Libc version: glibc-2.27
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] torchdynamo==1.13.0.dev0
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torchdynamo 1.13.0.dev0 dev_0 <develop>
cc @ezyang @gchanan @zou3519 @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
|
priority
|
investigate from padded implementations correctness summary a gradient test involving these ops created here is succeeding for on cuda but failing for and on cuda the input sizes are small enough were floating point accumulation should not be effecting this tests env details collecting environment information pytorch version n a is debug build n a cuda used to build pytorch n a rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version tags release final cmake version version libc version glibc python version main jun bit runtime python platform linux aws with is cuda available n a cuda runtime version gpu models and configuration gpu nvidia gpu nvidia gpu nvidia gpu nvidia gpu nvidia gpu nvidia gpu nvidia gpu nvidia nvidia driver version cudnn version probably one of the following usr local cuda targets linux lib libcudnn so usr local cuda targets linux lib libcudnn adv infer so usr local cuda targets linux lib libcudnn adv train so usr local cuda targets linux lib libcudnn cnn infer so usr local cuda targets linux lib libcudnn cnn train so usr local cuda targets linux lib libcudnn ops infer so usr local cuda targets linux lib libcudnn ops train so usr local cuda targets linux lib libcudnn so usr local cuda targets linux lib libcudnn adv infer so usr local cuda targets linux lib libcudnn adv train so usr local cuda targets linux lib libcudnn cnn infer so usr local cuda targets linux lib libcudnn cnn train so usr local cuda targets linux lib libcudnn ops infer so usr local cuda targets linux lib libcudnn ops train so usr local cuda targets linux lib libcudnn so usr local cuda targets linux lib libcudnn adv infer so usr local cuda targets linux lib libcudnn adv train so usr local cuda targets linux lib libcudnn cnn infer so usr local cuda targets linux lib libcudnn cnn train so usr local cuda targets linux lib libcudnn ops infer so usr local cuda targets linux lib libcudnn ops train so hip runtime version n a miopen runtime version n a is xnnpack 
available n a versions of relevant libraries mypy mypy extensions numpy torchdynamo mkl mkl include numpy pypi pypi torchdynamo dev cc ezyang gchanan cpuhrsch jbschlosser bhosmer mikaylagawarecki
| 1
|
500,044
| 14,485,157,078
|
IssuesEvent
|
2020-12-10 17:14:03
|
wazuh/wazuh-documentation
|
https://api.github.com/repos/wazuh/wazuh-documentation
|
closed
|
Unattended installation scripts improvements
|
priority: highest type: feature
|
Hello team!
This issue aims to improve the unattended installation scripts by adding a check before the installation to ensure that it is being installed in a 64bit system.
Regards,
David
|
1.0
|
Unattended installation scripts improvements - Hello team!
This issue aims to improve the unattended installation scripts by adding a check before the installation to ensure that it is being installed in a 64bit system.
Regards,
David
|
priority
|
unattended installation scripts improvements hello team this issue aims to improve the unattended installation scripts by adding a check before the installation to ensure that it is being installed in a system regards david
| 1
|
65,169
| 3,226,917,620
|
IssuesEvent
|
2015-10-10 18:28:08
|
biocore/qiita
|
https://api.github.com/repos/biocore/qiita
|
closed
|
EBI submission will fail silently if there are non-ascii characters in the templates
|
bug component: ebi GUI priority: high
|
It will raise the following error, but it won't be visible through the graphical user interface, the insdc status will still appear as 'submitting'.
```python
Traceback (most recent call last):
File "/home/qiita/qiita_main/scripts/qiita", line 486, in <module>
qiita()
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 664, in __call__
return self.main(*args, **kwargs)
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 644, in main
rv = self.invoke(ctx)
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 837, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 464, in invoke
return callback(*args, **kwargs)
File "/home/qiita/qiita_main/scripts/qiita", line 362, in submit
_submit_EBI(preprocessed_data_id, action, send, fastq_dir)
File "/home/qiita/qiita_main/qiita_ware/commands.py", line 146, in submit_EBI
submission_fp, action)
File "/home/qiita/qiita_main/qiita_ware/ebi.py", line 784, in write_all_xml_files
self.write_experiment_xml(experiment_fp)
File "/home/qiita/qiita_main/qiita_ware/ebi.py", line 725, in write_experiment_xml
'experiment_xml_fp', fp)
File "/home/qiita/qiita_main/qiita_ware/ebi.py", line 677, in _write_xml_file
xml = minidom.parseString(ET.tostring(xml_element))
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 1127, in tostring
ElementTree(element).write(file, encoding, method=method)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 821, in write
serialize(write, self._root, encoding, qnames, namespaces)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 938, in _serialize_xml
write(_escape_cdata(text, encoding))
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 1074, in _escape_cdata
return text.encode(encoding, "xmlcharrefreplace")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 578: ordinal not in range(128)
```
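The traceback ends in `text.encode(encoding, "xmlcharrefreplace")`, which in Python 2 first implicitly decodes the byte string as ASCII and fails on byte 0xc3 (the start of a UTF-8 multi-byte sequence). A minimal Python 3 sketch of the same serialization path, where text strings round-trip cleanly and `xmlcharrefreplace` substitutes character references instead of raising — the element name and text are hypothetical stand-ins for a template value:

```python
import xml.etree.ElementTree as ET

# Hypothetical template field containing a non-ASCII character
# ('ñ' is 0xc3 0xb1 in UTF-8, matching the 0xc3 byte in the traceback).
elem = ET.Element("SAMPLE_ATTRIBUTE")
elem.text = "Año de muestreo"

# Serializing to a text string handles non-ASCII directly.
xml_text = ET.tostring(elem, encoding="unicode")

# Forcing an ASCII byte encoding replaces non-ASCII characters with
# numeric character references rather than failing.
xml_ascii = xml_text.encode("ascii", "xmlcharrefreplace")
```

The silent-failure aspect of the bug is separate: the exception above never surfaced in the GUI, leaving the insdc status stuck at 'submitting'.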
|
1.0
|
EBI submission will fail silently if there are non-ascii characters in the templates - It will raise the following error, but it won't be visible through the graphical user interface, the insdc status will still appear as 'submitting'.
```python
Traceback (most recent call last):
File "/home/qiita/qiita_main/scripts/qiita", line 486, in <module>
qiita()
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 664, in __call__
return self.main(*args, **kwargs)
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 644, in main
rv = self.invoke(ctx)
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 837, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/qiita/.virtualenvs/qiita/lib/python2.7/site-packages/click/core.py", line 464, in invoke
return callback(*args, **kwargs)
File "/home/qiita/qiita_main/scripts/qiita", line 362, in submit
_submit_EBI(preprocessed_data_id, action, send, fastq_dir)
File "/home/qiita/qiita_main/qiita_ware/commands.py", line 146, in submit_EBI
submission_fp, action)
File "/home/qiita/qiita_main/qiita_ware/ebi.py", line 784, in write_all_xml_files
self.write_experiment_xml(experiment_fp)
File "/home/qiita/qiita_main/qiita_ware/ebi.py", line 725, in write_experiment_xml
'experiment_xml_fp', fp)
File "/home/qiita/qiita_main/qiita_ware/ebi.py", line 677, in _write_xml_file
xml = minidom.parseString(ET.tostring(xml_element))
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 1127, in tostring
ElementTree(element).write(file, encoding, method=method)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 821, in write
serialize(write, self._root, encoding, qnames, namespaces)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 940, in _serialize_xml
_serialize_xml(write, e, encoding, qnames, None)
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 938, in _serialize_xml
write(_escape_cdata(text, encoding))
File "/opt/python-2.7.3/lib/python2.7/xml/etree/ElementTree.py", line 1074, in _escape_cdata
return text.encode(encoding, "xmlcharrefreplace")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 578: ordinal not in range(128)
```
|
priority
|
ebi submission will fail silently if there are non ascii characters in the templates it will raise the following error but it won t be visible through the graphical user interface the insdc status will still appear as submitting python traceback most recent call last file home qiita qiita main scripts qiita line in qiita file home qiita virtualenvs qiita lib site packages click core py line in call return self main args kwargs file home qiita virtualenvs qiita lib site packages click core py line in main rv self invoke ctx file home qiita virtualenvs qiita lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file home qiita virtualenvs qiita lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file home qiita virtualenvs qiita lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file home qiita virtualenvs qiita lib site packages click core py line in invoke return ctx invoke self callback ctx params file home qiita virtualenvs qiita lib site packages click core py line in invoke return callback args kwargs file home qiita qiita main scripts qiita line in submit submit ebi preprocessed data id action send fastq dir file home qiita qiita main qiita ware commands py line in submit ebi submission fp action file home qiita qiita main qiita ware ebi py line in write all xml files self write experiment xml experiment fp file home qiita qiita main qiita ware ebi py line in write experiment xml experiment xml fp fp file home qiita qiita main qiita ware ebi py line in write xml file xml minidom parsestring et tostring xml element file opt python lib xml etree elementtree py line in tostring elementtree element write file encoding method method file opt python lib xml etree elementtree py line in write serialize write self root encoding qnames namespaces file opt python lib xml etree elementtree py line in serialize xml serialize xml 
write e encoding qnames none file opt python lib xml etree elementtree py line in serialize xml serialize xml write e encoding qnames none file opt python lib xml etree elementtree py line in serialize xml serialize xml write e encoding qnames none file opt python lib xml etree elementtree py line in serialize xml serialize xml write e encoding qnames none file opt python lib xml etree elementtree py line in serialize xml write escape cdata text encoding file opt python lib xml etree elementtree py line in escape cdata return text encode encoding xmlcharrefreplace unicodedecodeerror ascii codec can t decode byte in position ordinal not in range
| 1
|
690,085
| 23,645,332,398
|
IssuesEvent
|
2022-08-25 21:24:39
|
inverse-inc/packetfence
|
https://api.github.com/repos/inverse-inc/packetfence
|
closed
|
Re-testing email flows with merge of #7137
|
Type: Bug Priority: High
|
These flows have to be re-tested from scratch:
* Email/Sponsor activation.
* Security event action 'email_admin'
|
1.0
|
Re-testing email flows with merge of #7137 -
These flows have to be re-tested from scratch:
* Email/Sponsor activation.
* Security event action 'email_admin'
|
priority
|
re testing email flows with merge of these flows have to be re tested from scratch email sponsor activation security event action email admin
| 1
|
665,431
| 22,318,821,039
|
IssuesEvent
|
2022-06-14 02:54:33
|
Kiyomi-Parents/Kiyomi
|
https://api.github.com/repos/Kiyomi-Parents/Kiyomi
|
opened
|
Fix interaction defer
|
Priority: High Type: Bug Status: Available Difficulty: Medium
|
Almost all commands need to defer when responding.
Need to figure out how to respond to deferred messages.
|
1.0
|
Fix interaction defer - Almost all commands need to defer when responding.
Need to figure out how to respond to deferred messages.
|
priority
|
fix interaction defer almost all commands need to defer when responding need to figure out how to respond to deferred messages
| 1
|