Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 957 | labels stringlengths 4 795 | body stringlengths 1 259k | index stringclasses 12 values | text_combine stringlengths 96 259k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
677,529 | 23,164,381,546 | IssuesEvent | 2022-07-29 22:02:41 | CursedMC/YummyQuiltHacks | https://api.github.com/repos/CursedMC/YummyQuiltHacks | opened | ASM API Reform | enhancement help wanted medium priority rfc | ## Abstract
Currently, the ASM API is janky. We need to rework the API so we can use `MixoutPlugin`s instead of registering `TransformEvent`s.
## Alternatives
* Use the current event-based API.
## Advantages
## Drawbacks | 1.0 | ASM API Reform - ## Abstract
Currently, the ASM API is janky. We need to rework the API so we can use `MixoutPlugin`s instead of registering `TransformEvent`s.
## Alternatives
* Use the current event-based API.
## Advantages
## Drawbacks | priority | asm api reform abstract currently the asm api is janky we need to rework the api so we can use mixoutplugin s instead of registering transformevent s alternatives use the current event based api advantages drawbacks | 1 |
686,281 | 23,484,985,518 | IssuesEvent | 2022-08-17 13:46:27 | onesoft-sudo/sudobot | https://api.github.com/repos/onesoft-sudo/sudobot | closed | Log uncaught errors via webhooks | feature priority:medium non-moderation chore error-handling | Log uncaught errors via webhooks, in a channel on the home discord server. | 1.0 | Log uncaught errors via webhooks - Log uncaught errors via webhooks, in a channel on the home discord server. | priority | log uncaught errors via webhooks log uncaught errors via webhooks in a channel on the home discord server | 1 |
283,566 | 8,719,951,517 | IssuesEvent | 2018-12-08 06:37:47 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | new -v logic interferes with cli arg parsing | bug likelihood medium priority reviewed severity medium | visit -nowin -v 2.8.1 -cli -s <script>
will turn into:
visit -nowin -cli -s <script> -forceversion 2.8.1
The fact that "-forceversion 2.8.1" is put at the end of the command line undermines argument parsing (especially for standard python module "setup.py" scripts)
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2050
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: new -v logic interferes with cli arg parsing
Assigned to: Eric Brugger
Category:
Target version: 2.8.2
Author: Cyrus Harrison
Start: 10/31/2014
Due date:
% Done: 100
Estimated time: 2.0
Created: 10/31/2014 02:00 pm
Updated: 12/02/2014 12:45 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.8.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
visit -nowin -v 2.8.1 -cli -s <script>
will turn into:
visit -nowin -cli -s <script> -forceversion 2.8.1
The fact that "-forceversion 2.8.1" is put at the end of the command line undermines argument parsing (especially for standard python module "setup.py" scripts)
Comments:
I committed revisions 25060 and 25063 to the 2.8 RC and trunk with the following changes: 1) I modified the frontendlauncher so that arguments it adds to the argument list are added at the beginning of the list instead of the end, so that argument passing for scripts works properly. This resolves #2050. M bin/frontendlauncher.py M resources/help/en_US/relnotes2.8.2.html
| 1.0 | new -v logic interferes with cli arg parsing - visit -nowin -v 2.8.1 -cli -s <script>
will turn into:
visit -nowin -cli -s <script> -forceversion 2.8.1
The fact that "-forceversion 2.8.1" is put at the end of the command line undermines argument parsing (especially for standard python module "setup.py" scripts)
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2050
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: new -v logic interferes with cli arg parsing
Assigned to: Eric Brugger
Category:
Target version: 2.8.2
Author: Cyrus Harrison
Start: 10/31/2014
Due date:
% Done: 100
Estimated time: 2.0
Created: 10/31/2014 02:00 pm
Updated: 12/02/2014 12:45 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.8.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
visit -nowin -v 2.8.1 -cli -s <script>
will turn into:
visit -nowin -cli -s <script> -forceversion 2.8.1
The fact that "-forceversion 2.8.1" is put at the end of the command line undermines argument parsing (especially for standard python module "setup.py" scripts)
Comments:
I committed revisions 25060 and 25063 to the 2.8 RC and trunk with the following changes: 1) I modified the frontendlauncher so that arguments it adds to the argument list are added at the beginning of the list instead of the end, so that argument passing for scripts works properly. This resolves #2050. M bin/frontendlauncher.py M resources/help/en_US/relnotes2.8.2.html
| priority | new v logic interferes with cli arg parsing visit nowin v cli s will turn into visit nowin cli s forceversion the fact that forceversion it is put on the end of the command line undermines argument parsing especially for standard python module setup py scripts redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject new v logic interferes with cli arg parsing assigned to eric brugger category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood occasional severity major irritation found in version impact expected use os all support group any description visit nowin v cli s will turn into visit nowin cli s forceversion the fact that forceversion it is put on the end of the command line undermines argument parsing especially for standard python module setup py scripts comments i committed revisions and to the rc and trunk with thefollowing changes i modified the frontendlauncher so that arguments that it adds to the argument list are added at the beginning of the list instead of the end of the list so that argument passing for scripts works properly this resolves m bin frontendlauncher pym resources help en us html | 1 |
255,683 | 8,126,138,347 | IssuesEvent | 2018-08-17 00:15:25 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Move Radial Resample Operator into Geometry category?? | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal | Should the Radial Resample operator be moved to the Geometry category???
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2408
Status: Rejected
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Move Radial Resample Operator into Geometry category??
Assigned to:
Category:
Target version:
Author: Kevin Griffin
Start: 10/06/2015
Due date:
% Done: 0
Estimated time:
Created: 10/06/2015 12:40 pm
Updated: 10/08/2015 01:20 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Should the Radial Resample operator be moved to the Geometry category???
Comments:
| 1.0 | Move Radial Resample Operator into Geometry category?? - Should the Radial Resample operator be moved to the Geometry category???
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2408
Status: Rejected
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Move Radial Resample Operator into Geometry category??
Assigned to:
Category:
Target version:
Author: Kevin Griffin
Start: 10/06/2015
Due date:
% Done: 0
Estimated time:
Created: 10/06/2015 12:40 pm
Updated: 10/08/2015 01:20 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Should the Radial Resample operator be moved to the Geometry category???
Comments:
| priority | move radial resample operator into geometry category should the radial resample operator be moved to the geometry category redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status rejected project visit tracker feature priority normal subject move radial resample operator into geometry category assigned to category target version author kevin griffin start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group any description should the radial resample operator be moved to the geometry category comments | 1 |
468,019 | 13,460,074,401 | IssuesEvent | 2020-09-09 13:09:46 | silentium-labs/merlin-gql | https://api.github.com/repos/silentium-labs/merlin-gql | closed | GqlContext should allow for multiple roles | Priority: Medium Status: Pending Type: Enhancement | Currently the role property on the user field of the GqlContext only allows for one role, we should change that to multiple roles to give more flexibility | 1.0 | GqlContext should allow for multiple roles - Currently the role property on the user field of the GqlContext only allows for one role, we should change that to multiple roles to give more flexibility | priority | gqlcontext should allow for multiple roles currently the role property on the user field of the gqlcontext only allows for one role we should change that to multiple roles to give more flexibility | 1 |
651,756 | 21,509,480,036 | IssuesEvent | 2022-04-28 01:45:48 | Flutter-Vision/FlutterVision | https://api.github.com/repos/Flutter-Vision/FlutterVision | closed | RadioButton clickable area is really small | Bug Front-end Medium Priority | FileName??

Test if the "Group Name" properties appear in the radio button components and create multiple radio buttons with the same group name. | 1.0 | RadioButton clickable area is really small - FileName??

Test if the "Group Name" properties appear in the radio button components and create multiple radio buttons with the same group name. | priority | radiobutton clickable area is really small filename test if the group name properties appear in the radio button components and create multiple radio buttons with the same group name | 1 |
708,896 | 24,359,954,481 | IssuesEvent | 2022-10-03 10:47:47 | CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | https://api.github.com/repos/CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | closed | [Collaboration UI] Display Question Test Case | Module/Front-End Status/Medium-Priority Type/Feature | ## Description
The UI should display the question, including the input, output and an explanation.
## Parent Task
- #104 | 1.0 | [Collaboration UI] Display Question Test Case - ## Description
The UI should display the question, including the input, output and an explanation.
## Parent Task
- #104 | priority | display question test case description the ui should display the question including the input output and an explanation parent task | 1 |
196,031 | 6,923,534,570 | IssuesEvent | 2017-11-30 09:24:43 | remkos/rads | https://api.github.com/repos/remkos/rads | closed | Add DAC-ERA | data Priority-Medium | http://www.aviso.altimetry.fr/en/data/products/auxiliary-products/atmospheric-corrections/description-atmospheric-corrections.html
Within the Climate Change Initiative (CCI) project (ESA), a specific DAC-ERA has been computed for climate applications using the ECMWF ERA-INTERIM reanalysis on the 1991-2014 period. This DAC-ERA is significantly improved for the first years of altimetry (Carrère et al.,"Major improvement of altimetry sea level estimations using pressure derived corrections based on ERA-interim atmospheric reanalysis", to be submitted to Ocean Science)
| 1.0 | Add DAC-ERA - http://www.aviso.altimetry.fr/en/data/products/auxiliary-products/atmospheric-corrections/description-atmospheric-corrections.html
Within the Climate Change Initiative (CCI) project (ESA), a specific DAC-ERA has been computed for climate applications using the ECMWF ERA-INTERIM reanalysis on the 1991-2014 period. This DAC-ERA is significantly improved for the first years of altimetry (Carrère et al.,"Major improvement of altimetry sea level estimations using pressure derived corrections based on ERA-interim atmospheric reanalysis", to be submitted to Ocean Science)
| priority | add dac era within the climate change initiative cci project esa a specific dac era has been computed for climate applications using the ecmwf era interim reanalysis on the period this dac era is significantly improved for the first years of altimetry carrère et al major improvement of altimetry sea level estimations using pressure derived corrections based on era interim atmospheric reanalysis to be submitted to ocean science | 1 |
309,291 | 9,466,426,768 | IssuesEvent | 2019-04-18 04:28:33 | minio/minio | https://api.github.com/repos/minio/minio | closed | GCS: Return correct error message when wrong location is used in makeBucket | priority: medium | <!--- Provide a general summary of the issue in the Title above -->
if `us-west-1` is used for location, then the error sent to the client is `InvalidBucketName`, though the failure is because of invalid location. Correct error should be sent to the client.
## Expected Behavior
makeBucket should fail with Invalid Region error
## Current Behavior
makeBucket fails with InvalidBucketName
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
```
from minio import Minio
from minio.error import ResponseError
client = Minio('192.168.1.70:9000',
               access_key='minio',
               secret_key='minio123',
               secure=False)

# Make a new bucket
try:
    client.make_bucket('bucket-name', 'us-east-2')
except ResponseError as err:
    print(err)
```
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
No
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
* Operating System and version (`uname -a`):
* Link to your project: | 1.0 | GCS: Return correct error message when wrong location is used in makeBucket - <!--- Provide a general summary of the issue in the Title above -->
if `us-west-1` is used for location, then the error sent to the client is `InvalidBucketName`, though the failure is because of invalid location. Correct error should be sent to the client.
## Expected Behavior
makeBucket should fail with Invalid Region error
## Current Behavior
makeBucket fails with InvalidBucketName
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
```
from minio import Minio
from minio.error import ResponseError
client = Minio('192.168.1.70:9000',
               access_key='minio',
               secret_key='minio123',
               secure=False)

# Make a new bucket
try:
    client.make_bucket('bucket-name', 'us-east-2')
except ResponseError as err:
    print(err)
```
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
No
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
* Operating System and version (`uname -a`):
* Link to your project: | priority | gcs return correct error message when wrong location is used in makebucket if us west is used for location then the error sent to the client is invalidbucketname though the failure is because of invalid location correct error should be sent to the client expected behavior makebucket should fail with invalid region error current behavior makebucket fails with invalidbucketname possible solution steps to reproduce for bugs from minio import minio from minio error import responseerror client minio access key minio secret key secure false make a new bucket try client make bucket bucket name us east except responseerror as err print err context regression no your environment version used minio version environment name and version e g nginx server type and version operating system and version uname a link to your project | 1 |
50,819 | 3,007,001,604 | IssuesEvent | 2015-07-27 14:03:02 | Ombridride/minetest-minetestforfun-server | https://api.github.com/repos/Ombridride/minetest-minetestforfun-server | closed | Boat usebug | Modding@BugFix Priority@Medium | Many players use the boat to attack monsters safely; we need to fix this! (or at least make it harder to exploit...)
- Forbid left-clicking when in a boat?
- Forbid putting a boat anywhere other than water_source and river_source?
- Any other ideas? | 1.0 | Boat usebug - Many players use the boat to attack monsters safely; we need to fix this! (or at least make it harder to exploit...)
- Forbid left-clicking when in a boat?
- Forbid putting a boat anywhere other than water_source and river_source?
- Any other ideas? | priority | boat usebug many players use the boat to attack monster safely we need to fix this or reduce its easy utilisation forbidden the left click when in a boat forbidden put a boat in an another location than the water source and river source any other ideas | 1 |
716,793 | 24,648,504,550 | IssuesEvent | 2022-10-17 16:35:46 | PrefectHQ/prefect | https://api.github.com/repos/PrefectHQ/prefect | closed | The Value of Block is Not Updated After Clicking Save | bug ui priority:medium | ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
In the Docker Container block, after deleting the value for Networks, the value didn’t change after I clicked Save. I’m using Prefect Cloud.

I saw the same issue when changing the values of other blocks such as AWS
### Reproduction
```python3
- Add a value to the Networks section in the Docker Container block
- Click Save
- Update the value of the Networks section
- Click Save
```
### Error
_No response_
### Versions
```Text
Version: 2.4.4
API version: 0.8.1
Python version: 3.9.12
Git commit: cd649212
Built: Thu, Sep 29, 2022 2:01 PM
OS/Arch: darwin/arm64
Profile: cloud
Server type: cloud
```
### Additional context
_No response_ | 1.0 | The Value of Block is Not Updated After Clicking Save - ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
In the Docker Container block, after deleting the value for Networks, the value didn’t change after I clicked Save. I’m using Prefect Cloud.

I saw the same issue when changing the values of other blocks such as AWS
### Reproduction
```python3
- Add a value to the Networks section in the Docker Container block
- Click Save
- Update the value of the Networks section
- Click Save
```
### Error
_No response_
### Versions
```Text
Version: 2.4.4
API version: 0.8.1
Python version: 3.9.12
Git commit: cd649212
Built: Thu, Sep 29, 2022 2:01 PM
OS/Arch: darwin/arm64
Profile: cloud
Server type: cloud
```
### Additional context
_No response_ | priority | the value of block is not updated after clicking save first check i added a descriptive title to this issue i used the github search to find a similar issue and didn t find it i searched the prefect documentation for this issue i checked that this issue is related to prefect and not one of its dependencies bug summary in the docker container block after deleting the value for networks the value didn’t change after i clicked save i’m using prefect cloud i saw the same issue when changing the values of other blocks such as aws reproduction add a value to the networks section in the docker container block click save update the value of the networks section click save error no response versions text version api version python version git commit built thu sep pm os arch darwin profile cloud server type cloud additional context no response | 1 |
221,563 | 7,389,797,781 | IssuesEvent | 2018-03-16 10:00:35 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Address fields (auto generated) not displaying help | Fix Proposed Medium Priority Resolved: Next Release bug | #### Issue
You set help text for address fields, but the help is not displayed when you go into edit view.
#### Expected Behavior
The help texts you wrote should be displayed
#### Actual Behavior
No help is displayed
#### Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
#### Steps to Reproduce
1. Go into Studio / Accounts / Fields
2. Edit Billing Street and Billing City, setting help text for each field
3. Save and go to Accounts
4. Click new account
5. Place your mouse over these two fields.
#### Context
I wanted to show addresses format using this help
Medium priority
#### Your Environment
* SuiteCRM Version used: 7.9.2
* Chrome Version 59.0.3071.115 (64-bit):
* MySQL, PHP 7
* Ubuntu 14.04.5
| 1.0 | Address fields (auto generated) not displaying help - #### Issue
You set help text for address fields, but the help is not displayed when you go into edit view.
#### Expected Behavior
The help texts you wrote should be displayed
#### Actual Behavior
No help is displayed
#### Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
#### Steps to Reproduce
1. Go into Studio / Accounts / Fields
2. Edit Billing Street and Billing City, setting help text for each field
3. Save and go to Accounts
4. Click new account
5. Place your mouse over these two fields.
#### Context
I wanted to show the address format using this help
Medium priority
#### Your Environment
* SuiteCRM Version used: 7.9.2
* Chrome Version 59.0.3071.115 (64-bit):
* MySQL, PHP 7
* Ubuntu 14.04.5
| priority | address fields auto generated not displaying help issue you set a help text into help for address fields and this help is not displayed when you go into edit view expected behavior the help texts you wrote should be displayed actual behavior no help is displayed possible fix steps to reproduce go into studio accounts fields edit billing street and billing city you set a help text into help for each field save and go to accounts click new account place your mouse over these two fields context i wanted to show addresses format using this help medium priority your environment suitecrm version used chrome version bit mysql php ubuntu | 1 |
253,692 | 8,059,243,432 | IssuesEvent | 2018-08-02 21:13:14 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [deployer] Switch to a threadpool model instead of thread per target | enhancement priority: medium | Today we start a thread per deployment target. The thread can get stuck as it connects over SSH and never recover, so a target may get stuck. Additionally, this means a pretty heavy use of threads for a system with a large number of threads and a large number of Engines.
We should switch to:
* Thread pool with configurable threads (up and max) (think Tomcat)
* Threads compete for work and can "lock" a target as they operate on it
* There may be more targets than threads, and threads will continue to work against the queue until the work is completed for that period
* If a piece of work is scheduled before the prior invocation is completed, the new work is ignored (drop a period) -- not doing that can result in a backlog we cannot recover from
* A watchdog will monitor the threads, a thread is deemed stuck if it's working on the same work order for a configurable amount of time, in which case the watchdog will terminate the thread and the work order is queued for another worker thread to pick up
* A mechanism to stagger work orders may also be required to avoid DOSing the `published` repo from a large number of deployers (while this can be addressed by a content depot (or several)), it's nice to have a configurable mechanism to stagger requests if required
Ping me to discuss the details. | 1.0 | [deployer] Switch to a threadpool model instead of thread per target - Today we start a thread per deployment target. Given that the thread can get stuck as it connects over SSH and never recovers. A target may get stuck. Additionally, this means a pretty heavy use of threads for a system with a large number of threads and a large number of Engines.
We should switch to:
* Thread pool with configurable threads (up and max) (think Tomcat)
* Threads compete for work and can "lock" a target as they operate on it
* There may be more targets than threads, and threads will continue to work against the queue until the work is completed for that period
* If a piece of work is scheduled before the prior invocation is completed, the new work is ignored (drop a period) -- not doing that can result in a backlog we cannot recover from
* A watchdog will monitor the threads, a thread is deemed stuck if it's working on the same work order for a configurable amount of time, in which case the watchdog will terminate the thread and the work order is queued for another worker thread to pick up
* A mechanism to stagger work orders may also be required to avoid DOSing the `published` repo from a large number of deployers (while this can be addressed by a content depot (or several)), it's nice to have a configurable mechanism to stagger requests if required
Ping me to discuss the details. | priority | switch to a threadpool model instead of thread per target today we start a thread per deployment target given that the thread can get stuck as it connects over ssh and never recovers a target may get stuck additionally this means a pretty heavy use of threads for a system with a large number of threads and a large number of engines we should switch to thread pool with configurable threads up and max think tomcat threads compete for work and can lock a target as they operate on it there may be more targets than threads and threads will continue to work against the queue until the work is completed for that period if a piece of work is scheduled before the prior invocation is completed the new work is ignored drop a period not doing that can result in a backlog we cannot recover from a watchdog will monitor the threads a thread is deemed stuck if it s working on the same work order for a configurable amount of time in which case the watchdog will terminate the thread and the work order is queued for another worker thread to pick up a mechanism to stagger work orders may also be required to avoid dosing the published repo from a large number of deployers while this can be addressed by a content depot or several it s nice to have a configurable mechanism to stagger requests if required ping me to discuss the details | 1 |
359,413 | 10,675,887,656 | IssuesEvent | 2019-10-21 12:42:21 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | opened | Create the Vanilla version of the Dotcom Shell | dev package: vanilla priority: medium | #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
Developer
> I need to:
utilize a vanilla javascript version of the DotcomShell from the IBM.com library
> so that I can:
integrate the ibm.com page shell with masthead and footer in my application
#### Additional information
- should utilize the `styles` package for styling of the component
#### Acceptance criteria
- [ ] Include storybook html story
- [ ] minimum 80% unit test coverage
_Original issue: https://github.ibm.com/webstandards/digital-design/issues/1462_ | 1.0 | Create the Vanilla version of the Dotcom Shell - #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
Developer
> I need to:
utilize a vanilla javascript version of the DotcomShell from the IBM.com library
> so that I can:
integrate the ibm.com page shell with masthead and footer in my application
#### Additional information
- should utilize the `styles` package for styling of the component
#### Acceptance criteria
- [ ] Include storybook html story
- [ ] minimum 80% unit test coverage
_Original issue: https://github.ibm.com/webstandards/digital-design/issues/1462_ | priority | create the vanilla version of the dotcom shell user story as a developer i need to utilize a vanilla javascript version of the dotcomshell from the ibm com library so that i can integrate the ibm com page shell with masthead and footer in my application additional information should utilize the styles package for styling of the component acceptance criteria include storybook html story minimum unit test coverage original issue | 1 |
285,586 | 8,766,930,173 | IssuesEvent | 2018-12-17 18:12:22 | GrottoCenter/Grottocenter3 | https://api.github.com/repos/GrottoCenter/Grottocenter3 | closed | Display issues | Priority: Medium Type: Bug | - public entries count on homepage
- position of donate button
- hide option menu button on homepage header (on the right of the header) | 1.0 | Display issues - - public entries count on homepage
- position of donate button
- hide option menu button on homepage header (on the right of the header) | priority | display issues public entries count on homepage position or donate button hide option menu button on homepage header on the right of the header | 1 |
359,003 | 10,652,894,933 | IssuesEvent | 2019-10-17 13:29:55 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | web admin: bad handle of "unit" fields | Priority: Medium Type: Bug | When you want to modify the unit of a field in the web admin, it's easy to set a wrong value.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to System configuration -> Main configuration -> Advanced
2. On "API token max expiration" field, click in the dropdown unit:
=> It displays all units (seconds, minutes, hours, etc.) and its value becomes "h" in grey.
3. Click on "hours" in the dropdown
=> Value is set to "h" in grey
4. Save
=> API call send a "null" value in JSON payload for `unit` attribute
=> `api_max_expiration` is defined like this in `pf.conf`:
```
api_max_expiration=
```
In this situation, `api-frontend` service crashes due to a wrong value for this setting.
**Expected behavior**
Proper handling of this case in the web admin.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to System configuration -> Main configuration -> Advanced
2. On "API token max expiration" field, click in the dropdown unit:
=> It displays all units (seconds, minutes, hours, etc.) and its value becomes "h" in grey.
3. Click on "hours" in the dropdown
=> Value is set to "h" in grey
4. Save
=> API call send a "null" value in JSON payload for `unit` attribute
=> `api_max_expiration` is defined like this in `pf.conf`:
```
api_max_expiration=
```
In this situation, `api-frontend` service crashes due to a wrong value for this setting.
**Expected behavior**
Proper handle of this case in web admin. | priority | web admin bad handle of unit fields when you want to modify unit of a field on web admin it s easy to set a wrong value to reproduce steps to reproduce the behavior go to system configuration main configuration advanced on api token max expiration field click in the dropdown unit it displays all units seconds minutes hours etc and its value becomes h in grey click on hours in the dropdown value is set to h in grey save api call send a null value in json payload for unit attribute api max expiration is defined like this in pf conf api max expiration in this situation api frontend service crashes due to a wrong value for this setting expected behavior proper handle of this case in web admin | 1 |
686,825 | 23,505,746,240 | IssuesEvent | 2022-08-18 12:29:23 | apache/incubator-devlake | https://api.github.com/repos/apache/incubator-devlake | closed | [Doc][Metrics] Update the structure of 'Engineering Metrics' doc for better display | type/docs priority/medium | ## Documentation Scope
https://devlake.apache.org/docs/EngineeringMetrics
## Describe the Change
- [x] Change the table to small sections
- [x] Each section contains
- Metric definition/value/use case
- Queries
- Panel settings
- Screenshots
## Screenshots

| 1.0 | [Doc][Metrics] Update the structure of 'Engineering Metrics' doc for better display - ## Documentation Scope
https://devlake.apache.org/docs/EngineeringMetrics
## Describe the Change
- [x] Change the table to small sections
- [x] Each section contains
- Metric definition/value/use case
- Queries
- Panel settings
- Screenshots
## Screenshots

| priority | update the structure of engineering metrics doc for better display documentation scope describe the change change the table to small sections each section contains metric definition value use case queries panel settings screenshots screenshots | 1 |
47,881 | 2,986,852,963 | IssuesEvent | 2015-07-20 08:20:57 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | Move help files into add-ons | Priority-Medium Type-Enhancement | ```
Move all of the supported help files into add-ons
The English one should be included by default, which will (slightly) reduce the download
size while making it possible to update it via the marketplace.
We should have 'standard rules' governing the 5s at which specific translated help
files can be released at alpha, beta and release levels
This should probably just be based o
If a user selects a language for which there is a help file then they should be prompted
(in gui mode) to ask if they want to download it.
```
Original issue reported on code.google.com by `psiinon` on 2014-11-03 12:52:28 | 1.0 | Move help files into add-ons - ```
Move all of the supported help files into add-ons
The English one should be included by default, which will (slightly) reduce the download
size while making it possible to update it via the marketplace.
We should have 'standard rules' governing the 5s at which specific translated help
files can be released at alpha, beta and release levels
This should probably just be based o
If a user selects a language for which there is a help file then they should be prompted
(in gui mode) to ask if they want to download it.
```
Original issue reported on code.google.com by `psiinon` on 2014-11-03 12:52:28 | priority | move help files into add ons move all of the supported help files into add ons the english one should be included by default which will slightly reduce the download size while making it possible to update it via the marketplace we should have standard rules governing the at which specific translated help files can be released at alpha beta and release levels this should probably just be based o if a user selects a language for which there is a help file then they should be prompted in gui mode to ask if they want to download it original issue reported on code google com by psiinon on | 1 |
200,102 | 6,998,107,853 | IssuesEvent | 2017-12-16 23:13:08 | Marri/glowfic | https://api.github.com/repos/Marri/glowfic | opened | Allow drag-drop into and out of board sections in boards#edit | 4. low priority 8. medium type: enhancement | Allow users to drag-drop posts between board sections (including "unsectioned") on the board editor UI. | 1.0 | Allow drag-drop into and out of board sections in boards#edit - Allow users to drag-drop posts between board sections (including "unsectioned") on the board editor UI. | priority | allow drag drop into and out of board sections in boards edit allow users to drag drop posts between board sections including unsectioned on the board editor ui | 1 |
784,077 | 27,557,057,106 | IssuesEvent | 2023-03-07 18:46:47 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | opened | [UI] Disable two finger horizontal swipe gesture | enhancement web-ide usability priority-high efforts-medium | **Describe the bug**
In modern browsers, when using touchpads, the two finger horizontal swipe gesture equals pressing the back button. This is brilliant for normal web pages but in web apps such as Dirigible, it's very intrusive.
**Expected behavior**
The navigational gestures should be disabled.
**Desktop:**
- OS: Fedora Linux 36
- Browser: Firefox 110
- Version: Dirigible 7.1.6 | 1.0 | [UI] Disable two finger horizontal swipe gesture - **Describe the bug**
In modern browsers, when using touchpads, the two finger horizontal swipe gesture equals pressing the back button. This is brilliant for normal web pages but in web apps such as Dirigible, it's very intrusive.
**Expected behavior**
The navigational gestures should be disabled.
**Desktop:**
- OS: Fedora Linux 36
- Browser: Firefox 110
- Version: Dirigible 7.1.6 | priority | disable two finger horizontal swipe gesture describe the bug in modern browsers when using touchpads the two finger horizontal swipe gesture equals pressing the back button this is brilliant for normal web pages but in web apps such as dirigible it s very intrusive expected behavior the navigational gestures should be disabled desktop os fedora linux browser firefox version dirigible | 1 |
538,822 | 15,778,976,479 | IssuesEvent | 2021-04-01 08:16:54 | knurling-rs/probe-run | https://api.github.com/repos/knurling-rs/probe-run | closed | backtrace can infinite-loop. | difficulty: medium priority: high status: needs info type: bug | I'm seeing this behavior with `-C force-frame-pointers=no`.
I think it's to be expected that backtracing doesn't work correctly with it, but I think at least this should be detected and fail with `error: the stack appears to be corrupted beyond this point` instead of looping forever.
If there's interest I can try cooking a binary that reproduces this.
```
stack backtrace:
0: HardFaultTrampoline
<exception entry>
1: tester_gwc::sys::__cortex_m_rt_WDT
at ak/src/bin/../sys.rs:556
2: WDT
at ak/src/bin/../sys.rs:553
<exception entry>
3: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.8/src/future/select.rs:95
4: tester_gwc::common::abort_on_keypress::{{closure}}
at ak/src/bin/../tester_common.rs:26
5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
6: tester_gwc::test_network::{{closure}}
at ak/src/bin/tester_gwc.rs:61
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
8: tester_gwc::main::{{closure}}
at ak/src/bin/tester_gwc.rs:43
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: tester_gwc::sys::main_task::task::{{closure}}
at ak/src/bin/../sys.rs:196
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
12: embassy::executor::Task<F>::poll
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:132
13: core::cell::Cell<T>::get
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/cell.rs:432
14: embassy::executor::timer_queue::TimerQueue::update
at /home/dirbaio/akiles/embassy/embassy/src/executor/timer_queue.rs:34
15: embassy::executor::Executor::run::{{closure}}
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:241
16: embassy::executor::run_queue::RunQueue::dequeue_all
at /home/dirbaio/akiles/embassy/embassy/src/executor/run_queue.rs:65
17: embassy::executor::Executor::run
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:223
18: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
19: real_main
at ak/src/bin/../sys.rs:478
20: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
21: real_main
at ak/src/bin/../sys.rs:478
22: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
23: real_main
at ak/src/bin/../sys.rs:478
24: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
25: real_main
at ak/src/bin/../sys.rs:478
26: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
27: real_main
at ak/src/bin/../sys.rs:478
28: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
29: real_main
at ak/src/bin/../sys.rs:478
30: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
31: real_main
at ak/src/bin/../sys.rs:478
32: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
33: real_main
at ak/src/bin/../sys.rs:478
34: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
35: real_main
at ak/src/bin/../sys.rs:478
36: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
37: real_main
at ak/src/bin/../sys.rs:478
38: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
39: real_main
at ak/src/bin/../sys.rs:478
40: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
41: real_main
at ak/src/bin/../sys.rs:478
42: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
43: real_main
at ak/src/bin/../sys.rs:478
44: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
45: real_main
at ak/src/bin/../sys.rs:478
46: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
47: real_main
at ak/src/bin/../sys.rs:478
... this goes on forever
``` | 1.0 | backtrace can infinite-loop. - I'm seeing this behavior with `-C force-frame-pointers=no`.
I think it's to be expected that backtracing doesn't work correctly with it, but I think at least this should be detected and fail with `error: the stack appears to be corrupted beyond this point` instead of looping forever.
If there's interest I can try cooking a binary that reproduces this.
```
stack backtrace:
0: HardFaultTrampoline
<exception entry>
1: tester_gwc::sys::__cortex_m_rt_WDT
at ak/src/bin/../sys.rs:556
2: WDT
at ak/src/bin/../sys.rs:553
<exception entry>
3: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.8/src/future/select.rs:95
4: tester_gwc::common::abort_on_keypress::{{closure}}
at ak/src/bin/../tester_common.rs:26
5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
6: tester_gwc::test_network::{{closure}}
at ak/src/bin/tester_gwc.rs:61
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
8: tester_gwc::main::{{closure}}
at ak/src/bin/tester_gwc.rs:43
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: tester_gwc::sys::main_task::task::{{closure}}
at ak/src/bin/../sys.rs:196
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
12: embassy::executor::Task<F>::poll
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:132
13: core::cell::Cell<T>::get
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/cell.rs:432
14: embassy::executor::timer_queue::TimerQueue::update
at /home/dirbaio/akiles/embassy/embassy/src/executor/timer_queue.rs:34
15: embassy::executor::Executor::run::{{closure}}
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:241
16: embassy::executor::run_queue::RunQueue::dequeue_all
at /home/dirbaio/akiles/embassy/embassy/src/executor/run_queue.rs:65
17: embassy::executor::Executor::run
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:223
18: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
19: real_main
at ak/src/bin/../sys.rs:478
20: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
21: real_main
at ak/src/bin/../sys.rs:478
22: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
23: real_main
at ak/src/bin/../sys.rs:478
24: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
25: real_main
at ak/src/bin/../sys.rs:478
26: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
27: real_main
at ak/src/bin/../sys.rs:478
28: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
29: real_main
at ak/src/bin/../sys.rs:478
30: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
31: real_main
at ak/src/bin/../sys.rs:478
32: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
33: real_main
at ak/src/bin/../sys.rs:478
34: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
35: real_main
at ak/src/bin/../sys.rs:478
36: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
37: real_main
at ak/src/bin/../sys.rs:478
38: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
39: real_main
at ak/src/bin/../sys.rs:478
40: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
41: real_main
at ak/src/bin/../sys.rs:478
42: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
43: real_main
at ak/src/bin/../sys.rs:478
44: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
45: real_main
at ak/src/bin/../sys.rs:478
46: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
47: real_main
at ak/src/bin/../sys.rs:478
... this goes on forever
``` | priority | backtrace can infinte loop i m seeing this behavior with c force frame pointers no i think it s to be expected that backtracing doesn t work correctly with it but i think at least this should be detected and fail with error the stack appears to be corrupted beyond this point instead of looping forever if there s interest i can try cooking a binary that reproduces this stack backtrace hardfaulttrampoline tester gwc sys cortex m rt wdt at ak src bin sys rs wdt at ak src bin sys rs as core future future future poll at home dirbaio cargo registry src github com futures util src future select rs tester gwc common abort on keypress closure at ak src bin tester common rs as core future future future poll at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src future mod rs tester gwc test network closure at ak src bin tester gwc rs as core future future future poll at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src future mod rs tester gwc main closure at ak src bin tester gwc rs as core future future future poll tester gwc sys main task task closure at ak src bin sys rs as core future future future poll at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src future mod rs embassy executor task poll at home dirbaio akiles embassy embassy src executor mod rs core cell cell get at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src cell rs embassy executor timer queue timerqueue update at home dirbaio akiles embassy embassy src executor timer queue rs embassy executor executor run closure at home dirbaio akiles embassy embassy src executor mod rs embassy executor run queue runqueue dequeue all at home dirbaio akiles embassy embassy src executor run queue rs embassy executor executor run at home dirbaio akiles embassy embassy src executor mod rs cortex m asm wfe at home dirbaio cargo 
registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs this goes on forever | 1 |
252,759 | 8,041,443,460 | IssuesEvent | 2018-07-31 02:59:26 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Mapping OutputDataset to TaskName | Medium Priority | in case of step/task chain, what is the best way to get
{
"Task1": ["output1"],
"Task2":[].
...
}
type of dictionnary ? | 1.0 | Mapping OutputDataset to TaskName - in case of step/task chain, what is the best way to get
{
"Task1": ["output1"],
"Task2":[].
...
}
type of dictionnary ? | priority | mapping outputdataset to taskname in case of step task chain what is the best way to get type of dictionnary | 1 |
401,484 | 11,790,772,483 | IssuesEvent | 2020-03-17 19:38:25 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | MultibodyPlant::DoCalcTimeDerivatives() throws (i.e., running in continuous mode) | priority: medium team: dynamics type: bug | An error message is output that the articulated body inertia is not valid. No existing inertial or kinematic parameter checks have triggered any problems before the change to the O(n) algorithm.
The issue appears to be that the tolerances are set too tight, as this comment indicates:
https://github.com/RobotLocomotion/drake/blob/161b467072cb6df305b58e19b341cd6403d70c79/multibody/tree/articulated_body_inertia.h#L170
Changing the tolerance, just slightly, from -1e-14 to -7.5e-13 keeps the exception from being thrown. We can provide the SDF file we are using if it would be helpful for debugging (my initial thought is that it would not be).
Marking this medium priority because we are able to get around this by disabling that check but others are likely to hit this landmine, particularly when the ABA becomes the default. Consider increasing to high priority if it is not fixed before the ABA algorithm becomes the default.
| 1.0 | MultibodyPlant::DoCalcTimeDerivatives() throws (i.e., running in continuous mode) - An error message is output that the articulated body inertia is not valid. No existing inertial or kinematic parameter checks have triggered any problems before the change to the O(n) algorithm.
The issue appears to be that the tolerances are set too tight, as this comment indicates:
https://github.com/RobotLocomotion/drake/blob/161b467072cb6df305b58e19b341cd6403d70c79/multibody/tree/articulated_body_inertia.h#L170
Changing the tolerance, just slightly, from -1e-14 to -7.5e-13 keeps the exception from being thrown. We can provide the SDF file we are using if it would be helpful for debugging (my initial thought is that it would not be).
Marking this medium priority because we are able to get around this by disabling that check but others are likely to hit this landmine, particularly when the ABA becomes the default. Consider increasing to high priority if it is not fixed before the ABA algorithm becomes the default.
| priority | multibodyplant docalctimederivatives throws i e running in continuous mode an error message is output that the articulated body inertia is not valid no existing inertial or kinematic parameter checks have triggered any problems before the change to the o n algorithm the issue appears to be that the tolerances are set to tight as this comment indicates changing the tolerance just slightly from to keeps the exception from being thrown we can provide the sdf file we are using if it would be helpful for debugging my initial thought is that it would not be marking this medium priority because we are able to get around this by disabling that check but others are likely to hit this landmine particularly when the aba becomes the default consider increasing to high priority if it is not fixed before the aba algorithm becomes the default | 1 |
61,272 | 3,143,488,950 | IssuesEvent | 2015-09-14 07:24:04 | SiCKRAGETV/sickrage-issues | https://api.github.com/repos/SiCKRAGETV/sickrage-issues | closed | [APP SUBMITTED]: 'NoneType' object has no attribute 'mount' | 1: Bug / issue 2: Medium Priority | ### INFO
Python Version: **2.7.9 (default, Apr 2 2015, 15:34:55) [GCC 4.9.2]**
Operating System: **Linux-3.19.0-26-generic-i686-with-Ubuntu-15.04-vivid**
Locale: UTF-8
Branch: **develop**
Commit: SiCKRAGETV/SickRage@f15adac877b6f4c7f434f734ecf6824e87675ddc
Link to Log: https://gist.github.com/e3f169680dace8d48a89
### ERROR
```
SEARCHQUEUE-MANUAL-75340 :: [AlphaRatio] :: Unable to connect to AlphaRatio provider.
```
---
_STAFF NOTIFIED_: @SiCKRAGETV/owners @SiCKRAGETV/moderators | 1.0 | [APP SUBMITTED]: 'NoneType' object has no attribute 'mount' - ### INFO
Python Version: **2.7.9 (default, Apr 2 2015, 15:34:55) [GCC 4.9.2]**
Operating System: **Linux-3.19.0-26-generic-i686-with-Ubuntu-15.04-vivid**
Locale: UTF-8
Branch: **develop**
Commit: SiCKRAGETV/SickRage@f15adac877b6f4c7f434f734ecf6824e87675ddc
Link to Log: https://gist.github.com/e3f169680dace8d48a89
### ERROR
```
SEARCHQUEUE-MANUAL-75340 :: [AlphaRatio] :: Unable to connect to AlphaRatio provider.
```
---
_STAFF NOTIFIED_: @SiCKRAGETV/owners @SiCKRAGETV/moderators | priority | nonetype object has no attribute mount info python version default apr operating system linux generic with ubuntu vivid locale utf branch develop commit sickragetv sickrage link to log error searchqueue manual unable to connect to alpharatio provider staff notified sickragetv owners sickragetv moderators | 1 |
26,106 | 2,684,174,740 | IssuesEvent | 2015-03-28 18:37:04 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | abd.exe (Android Debug Bridge) shell history bug | 2–5 stars bug imported Priority-Medium | _From [LazyRoy@gmail.com](https://code.google.com/u/LazyRoy@gmail.com/) on September 10, 2012 06:31:08_
Required information! OS version: WinXP SP3 x86 ConEmu version: 120904
Far version (if you are using Far Manager): 1.75 build 2619 x86
Если "abd shell" выполняется из cmd файла, то в нем перестает работать история команд - по стрелкам.
Если запускается напрямую abd.exe - то работает.
Если запускается cmd из консоли без ConEmu - то работает.
Если запускается cmd из Far без ConEmu - то работает.
Если запускается cmd без Far в ConEmu - то работает.
Видимо какой-то нюанс совместимости "запускатора" Far'а для cmd файлов и ConEmu . Подозреваю, что он чреват не только этим багом.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=700_ | 1.0 | abd.exe (Android Debug Bridge) shell history bug - _From [LazyRoy@gmail.com](https://code.google.com/u/LazyRoy@gmail.com/) on September 10, 2012 06:31:08_
Required information! OS version: WinXP SP3 x86 ConEmu version: 120904
Far version (if you are using Far Manager): 1.75 build 2619 x86
Если "abd shell" выполняется из cmd файла, то в нем перестает работать история команд - по стрелкам.
Если запускается напрямую abd.exe - то работает.
Если запускается cmd из консоли без ConEmu - то работает.
Если запускается cmd из Far без ConEmu - то работает.
Если запускается cmd без Far в ConEmu - то работает.
Видимо какой-то нюанс совместимости "запускатора" Far'а для cmd файлов и ConEmu . Подозреваю, что он чреват не только этим багом.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=700_ | priority | abd exe android debug bridge shell history bug from on september required information os version winxp conemu version far version if you are using far manager build если abd shell выполняется из cmd файла то в нем перестает работать история команд по стрелкам если запускается напрямую abd exe то работает если запускается cmd из консоли без conemu то работает если запускается cmd из far без conemu то работает если запускается cmd без far в conemu то работает видимо какой то нюанс совместимости запускатора far а для cmd файлов и conemu подозреваю что он чреват не только этим багом original issue | 1 |
479,067 | 13,790,882,120 | IssuesEvent | 2020-10-09 11:11:12 | Kreateer/automatic-file-sorter | https://api.github.com/repos/Kreateer/automatic-file-sorter | opened | Allow user to choose whether or not to continuously move/copy files from src to dst | GUI difficulty: medium enhancement good first issue hacktoberfest optional-feature priority: low | # Goal
- Allow user to choose whether or not to continuously move/copy files from chosen source to destination as they are added.
# Details
- When the user chooses this option, the program should run on a loop and move/copy and/or sort any files that are subsequently added into the source folder.
- This issue is in tandem with issue #2
 | 1.0 | Allow user to choose whether or not to continuously move/copy files from src to dst - # Goal
- Allow user to choose whether or not to continuously move/copy files from chosen source to destination as they are added.
# Details
- When the user chooses this option, the program should run on a loop and move/copy and/or sort any files that are subsequently added into the source folder.
- This issue is in tandem with issue #2
| priority | allow user to choose whether or not to continously move copy files from src to dst goal allow user to choose whether or not to continously move copy files from chosen source to destination as they are added details when the user chooses this option the program should run on a loop and move copy and or sort any files that are subsequently added into the source folder this issue is in tandem with issue | 1 |
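The behavior this feature request describes, a loop that continuously moves or copies files from a source folder to a destination as they appear, can be sketched as follows (illustrative only; the function name, parameters, and polling interval are invented for the example and are not part of the project):

```python
import shutil
import time
from pathlib import Path

def watch_and_transfer(src, dst, copy=False, interval=5.0, iterations=None):
    """Poll src and move (or copy) any files that appear into dst.

    iterations=None loops forever, mirroring the "continuous" mode the
    feature request describes; an integer bounds the loop (useful in tests).
    """
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    done = 0
    while iterations is None or done < iterations:
        for entry in src.iterdir():
            if entry.is_file():
                target = dst / entry.name
                if copy:
                    shutil.copy2(entry, target)
                else:
                    shutil.move(str(entry), str(target))
        done += 1
        if iterations is None or done < iterations:
            time.sleep(interval)
```

A real implementation would likely add error handling and the sorting step mentioned in the details, but the core "run on a loop and move/copy" idea reduces to a polling loop like this.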
391,616 | 11,576,283,724 | IssuesEvent | 2020-02-21 11:34:45 | bounswe/bounswe2020group4 | https://api.github.com/repos/bounswe/bounswe2020group4 | opened | Licensing | Effort: Medium Priority: Low Status: Pending Type: Research | The license of the project needs to be decided.
Some of the popular licenses by comparison:
https://choosealicense.com/licenses/
How to add license to project:
https://stackoverflow.com/a/31666878/10091826 | 1.0 | Licensing - The license of the project needs to be decided.
Some of the popular licenses by comparison:
https://choosealicense.com/licenses/
How to add license to project:
https://stackoverflow.com/a/31666878/10091826 | priority | licensing the license of the project needs to be decided some of the popular licenses by comparison how to add license to project | 1 |
444,679 | 12,819,177,609 | IssuesEvent | 2020-07-06 01:01:16 | OpenMined/PyGridNetwork | https://api.github.com/repos/OpenMined/PyGridNetwork | closed | Add parameters parser APP | Good first issue :mortar_board: Priority: 3 - Medium :unamused: Severity: 3 - Medium :unamused: Type: Improvement :chart_with_upwards_trend: | ## Description
At this moment when an user set up the server, some parameters are hardcoded https://github.com/OpenMined/PyGridNetwork/blob/0a9e6197c3f4436887b3312b33bc9709f46f9873/gridnetwork/__init__.py#L78
A parameter parser `argparser` similar to what we had in PyGrid repository would help
## Are you interested in working on this improvement yourself?
- Yes, I am.
| 1.0 | Add parameters parser APP - ## Description
At this moment when an user set up the server, some parameters are hardcoded https://github.com/OpenMined/PyGridNetwork/blob/0a9e6197c3f4436887b3312b33bc9709f46f9873/gridnetwork/__init__.py#L78
A parameter parser `argparser` similar to what we had in PyGrid repository would help
## Are you interested in working on this improvement yourself?
- Yes, I am.
| priority | add parameters parser app description at this moment when a user sets up the server some parameters are hardcoded a parameter parser argparser similar to what we had in pygrid repository would help are you interested in working on this improvement yourself yes i am | 1
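The record above asks for an `argparse`-based parameter parser like the one PyGrid had. A minimal sketch of what that could look like — the option names and defaults here are hypothetical; the real ones would mirror whatever is currently hard-coded in `gridnetwork/__init__.py`:

```python
import argparse

def build_parser():
    # Hypothetical parameters; substitute the values that are
    # currently hard-coded at server start-up.
    parser = argparse.ArgumentParser(description="Start a grid network node")
    parser.add_argument("--host", default="0.0.0.0", help="bind address")
    parser.add_argument("--port", type=int, default=5000, help="bind port")
    return parser

# Unspecified options fall back to their defaults.
args = build_parser().parse_args(["--port", "7000"])
assert args.port == 7000
assert args.host == "0.0.0.0"
```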
526,188 | 15,283,145,899 | IssuesEvent | 2021-02-23 10:28:40 | pupil-labs/pupil | https://api.github.com/repos/pupil-labs/pupil | closed | Test msgpack 0.6.0 compatibility | dependency: msgpack priority: medium | We use [check_code](https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L23)
```
assert (
msgpack.version[1] == 5
), "msgpack out of date, please upgrade to version (0, 5, 6 ) or later."
```
which doesn't work when msgpack version is 0.6.0
we can use:
```
from distutils.version import LooseVersion, StrictVersion
assert StrictVersion(msgpack.version) >= StrictVersion("0.5.6")
```
instead | 1.0 | Test msgpack 0.6.0 compatibility - We use [check_code](https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L23)
```
assert (
msgpack.version[1] == 5
), "msgpack out of date, please upgrade to version (0, 5, 6 ) or later."
```
which doesn't work when msgpack version is 0.6.0
we can use:
```
from distutils.version import LooseVersion, StrictVersion
assert StrictVersion(msgpack.version) >= StrictVersion("0.5.6")
```
instead | priority | test msgpack compatibility we use assert msgpack version msgpack out of date please upgrade to version or later which doesn t work when msgpack version is we can use from distutils version import looseversion strictversion assert strictversion msgpack version strictversion instead | 1 |
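A note on the snippet quoted in the record above: `msgpack.version` is exposed as a tuple of ints, while `StrictVersion` expects a string, so a plain tuple comparison sidesteps both the hard-coded `== 5` check and the string requirement. A sketch of the idea (not the project's actual fix):

```python
def parse_version(version_string):
    # "0.5.6" -> (0, 5, 6); tuples of ints compare lexicographically,
    # so (0, 6, 0) >= (0, 5, 6) holds as expected.
    return tuple(int(part) for part in version_string.split("."))

def meets_minimum(current, minimum="0.5.6"):
    return parse_version(current) >= parse_version(minimum)

assert meets_minimum("0.6.0")       # the release that broke the old check
assert meets_minimum("0.5.6")       # the exact minimum still passes
assert not meets_minimum("0.5.5")   # older releases still fail
```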
96,883 | 3,974,634,418 | IssuesEvent | 2016-05-04 23:06:44 | haganbt/pepp | https://api.github.com/repos/haganbt/pepp | closed | Tableau Exports - add ability to overload tableau copy for easy regeneration of custom workbooks | enhancement PRIORITY: Medium | It is typical to want to refresh a custom Tableau workbook repeatedly. To support this, we could overload the workbook export function by simply checking if a workbook with the same name as the config file exists. This was, any user can move a tableau workbook in to the source dir, and when the recipe is run, the custom workbook would be copied out to the destination folder.
- [ ] Create root level dir for tableau templates
- [ ] Alter logic to always check for a workbook named the same as the config file - if exists rewrite paths etc and copy to destination | 1.0 | Tableau Exports - add ability to overload tableau copy for easy regeneration of custom workbooks - It is typical to want to refresh a custom Tableau workbook repeatedly. To support this, we could overload the workbook export function by simply checking if a workbook with the same name as the config file exists. This was, any user can move a tableau workbook in to the source dir, and when the recipe is run, the custom workbook would be copied out to the destination folder.
- [ ] Create root level dir for tableau templates
- [ ] Alter logic to always check for a workbook named the same as the config file - if exists rewrite paths etc and copy to destination | priority | tableau exports add ability to overload tableau copy for easy regeneration of custom workbooks it is typical to want to refresh a custom tableau workbook repeatedly to support this we could overload the workbook export function by simply checking if a workbook with the same name as the config file exists this was any user can move a tableau workbook in to the source dir and when the recipe is run the custom workbook would be copied out to the destination folder create root level dir for tableau templates alter logic to always check for a workbook named the same as the config file if exists rewrite paths etc and copy to destination | 1 |
484,418 | 13,939,585,218 | IssuesEvent | 2020-10-22 16:41:22 | interferences-at/mpop | https://api.github.com/repos/interferences-at/mpop | closed | Update all the English text for each multiple-question pages | QML difficulty: medium kiosk_central mpop_kiosk priority: high | The text are detailed in #84
There is a QML model for the questions.
Ask @aalex in case of doubt about which question has which text.
`ModelQuestions.qml`
Also update min/max text for both languages. | 1.0 | Update all the English text for each multiple-question pages - The text are detailed in #84
There is a QML model for the questions.
Ask @aalex in case of doubt about which question has which text.
`ModelQuestions.qml`
Also update min/max text for both languages. | priority | update all the english text for each multiple question pages the text are detailed in there is qml model for the questions ask aalex in case of doubt about which question has which text modelquestions qml also update min max text for both languages | 1 |
311,538 | 9,534,950,457 | IssuesEvent | 2019-04-30 04:25:53 | cuappdev/ithaca-transit-ios | https://api.github.com/repos/cuappdev/ithaca-transit-ios | opened | RouteOptions: Circle gets flattened in route diagram | Priority: Medium Type: Bug | Suspect it's a constraint issue; no idea though
<img src="https://user-images.githubusercontent.com/36868927/56940753-4b4afb80-6ade-11e9-9f44-e3019a44affb.png" width="300"> | 1.0 | RouteOptions: Circle gets flattened in route diagram - Suspect it's a constraint issue; no idea though
<img src="https://user-images.githubusercontent.com/36868927/56940753-4b4afb80-6ade-11e9-9f44-e3019a44affb.png" width="300"> | priority | routeoptions circle gets flattened in route diagram suspect it s a constraint issue no idea though | 1 |
258,664 | 8,178,772,260 | IssuesEvent | 2018-08-28 14:38:46 | AlexsLemonade/refinebio-frontend | https://api.github.com/repos/AlexsLemonade/refinebio-frontend | closed | 'Back to Results' Button Loses Search Position | interaction medium priority review | To reproduce:
View by 50 per page.
Scroll down to 50, click an item.
Press Back to Results Page
You are now at item 1 of 50.
This is going to be infuriating for building large data-sets. | 1.0 | 'Back to Results' Button Loses Search Position - To reproduce:
View by 50 per page.
Scroll down to 50, click an item.
Press Back to Results Page
You are now at item 1 of 50.
This is going to be infuriating for building large data-sets. | priority | back to results button loses search position to reproduce view by per page scroll down to click an item press back to results page you are now at item of this is going to be infuriating for building large data sets | 1 |
632,992 | 20,241,501,544 | IssuesEvent | 2022-02-14 09:43:23 | PoProstuMieciek/wikipedia-scraper | https://api.github.com/repos/PoProstuMieciek/wikipedia-scraper | closed | feat/prepare-images-table | priority: medium type: feat scope: database | **AC**
- [ ] primary key - subpage id
- [x] image etag string - reference to object store | 1.0 | feat/prepare-images-table - **AC**
- [ ] primary key - subpage id
- [x] image etag string - reference to object store | priority | feat prepare images table ac primary key subpage id image etag string reference to object store | 1 |
351,953 | 10,525,704,140 | IssuesEvent | 2019-09-30 15:33:47 | forceworkbench/forceworkbench | https://api.github.com/repos/forceworkbench/forceworkbench | closed | Add UI support for GROUP BY in SOQL | Component-Query Priority-Medium Scheduled-Backlog enhancement imported | _Original author: ryan.bra...@gmail.com (February 06, 2010 04:45:16)_
New GROUP BY Clause
Idea light bulb You asked for it! This enhancement is an idea from the
IdeaExchange.
Spring '10 introduces a new GROUP BY clause in SOQL that is similar to
GROUP BY in SQL. You can use GROUP BY with new aggregate functions, such as
SUM() or MAX(), to summarize the data and roll up query results rather than
having to process the individual records in your code. For example, you can
use GROUP BY to determine how many leads are associated with each
LeadSource value:
```
SELECT LeadSource, COUNT(Name)
FROM Lead
GROUP BY LeadSource
```
If you want a query to do the work of calculating subtotals so that you
don't have to maintain that logic in your code, use GROUP BY ROLLUP. If you
want to calculate subtotals for every possible combination of grouped field
(to generate a cross-tabular report, for example), use GROUP BY CUBE instead.
For more information, see “GROUP BY” in the Force.com Web Services API
Developer's Guide.
_Original issue: http://code.google.com/p/forceworkbench/issues/detail?id=272_
| 1.0 | Add UI support for GROUP BY in SOQL - _Original author: ryan.bra...@gmail.com (February 06, 2010 04:45:16)_
New GROUP BY Clause
Idea light bulb You asked for it! This enhancement is an idea from the
IdeaExchange.
Spring '10 introduces a new GROUP BY clause in SOQL that is similar to
GROUP BY in SQL. You can use GROUP BY with new aggregate functions, such as
SUM() or MAX(), to summarize the data and roll up query results rather than
having to process the individual records in your code. For example, you can
use GROUP BY to determine how many leads are associated with each
LeadSource value:
```
SELECT LeadSource, COUNT(Name)
FROM Lead
GROUP BY LeadSource
```
If you want a query to do the work of calculating subtotals so that you
don't have to maintain that logic in your code, use GROUP BY ROLLUP. If you
want to calculate subtotals for every possible combination of grouped field
(to generate a cross-tabular report, for example), use GROUP BY CUBE instead.
For more information, see “GROUP BY” in the Force.com Web Services API
Developer's Guide.
_Original issue: http://code.google.com/p/forceworkbench/issues/detail?id=272_
| priority | add ui support for group by in soql original author ryan bra gmail com february new group by clause idea light bulb you asked for it this enhancement is an idea from the ideaexchange spring introduces a new group by clause in soql that is similar to group by in sql you can use group by with new aggregate functions such as sum or max to summarize the data and roll up query results rather than having to process the individual records in your code for example you can use group by to determine how many leads are associated with each leadsource value select leadsource count name from lead group by leadsource if you want a query to do the work of calculating subtotals so that you don t have to maintain that logic in your code use group by rollup if you want to calculate subtotals for every possible combination of grouped field to generate a cross tabular report for example use group by cube instead for more information see “group by” in the force com web services api developer s guide original issue | 1 |
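The aggregation the release note describes — counting leads per `LeadSource` — is exactly what one would otherwise compute client-side after fetching raw records. A sketch in Python, for comparison, using hypothetical result rows:

```python
from collections import Counter

# Hypothetical rows, as they might come back without GROUP BY support.
leads = [
    {"Name": "Lead A", "LeadSource": "Web"},
    {"Name": "Lead B", "LeadSource": "Web"},
    {"Name": "Lead C", "LeadSource": "Phone"},
]

# Client-side equivalent of:
#   SELECT LeadSource, COUNT(Name) FROM Lead GROUP BY LeadSource
counts = Counter(lead["LeadSource"] for lead in leads)
assert counts == {"Web": 2, "Phone": 1}
```

GROUP BY moves this bookkeeping into the query itself, which is why UI support for it in Workbench is worthwhile.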
2,282 | 2,525,001,916 | IssuesEvent | 2015-01-20 21:32:15 | graybeal/ont | https://api.github.com/repos/graybeal/ont | closed | Process graphId parameter in direct registration capability | 1 star duplicate enhancement imported ooici portal Priority-Medium | _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on November 24, 2009 22:54:01_
What capability do you want added or improved? register an ontology such that the contents are added to a desired graph Where do you want this capability to be accessible? In the direct registration capability, issue `#214` What sort of input/command mechanism do you want? A parameter "graphId" for the POST request Other details of your desired capability? See enhancement issue `#214` I'm not setting a high priority for the moment as I think we can do tests
of the overall functionality in the semantic prototype by just using the
default graph in the ORR. Luis: please comment and adjust priority if
necessary.
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=225_ | 1.0 | Process graphId parameter in direct registration capability - _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on November 24, 2009 22:54:01_
What capability do you want added or improved? register an ontology such that the contents are added to a desired graph Where do you want this capability to be accessible? In the direct registration capability, issue `#214` What sort of input/command mechanism do you want? A parameter "graphId" for the POST request Other details of your desired capability? See enhancement issue `#214` I'm not setting a high priority for the moment as I think we can do tests
of the overall functionality in the semantic prototype by just using the
default graph in the ORR. Luis: please comment and adjust priority if
necessary.
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=225_ | priority | process graphid parameter in direct registration capability from on november what capability do you want added or improved register an ontology such that the contents are added to a desired graph where do you want this capability to be accessible in the direct registration capability issue what sort of input command mechanism do you want a parameter graphid for the post request other details of your desired capability see enhancement issue i m not setting a high priority for the moment as i think we can do tests of the overall functionality in the semantic prototype by just using the default graph in the orr luis please comment and adjust priority if necessary original issue | 1 |
188,292 | 6,774,957,883 | IssuesEvent | 2017-10-27 12:36:01 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Coverity issue seen with CID: 178235 | area: Networking bug priority: medium | Static code scan issues seen in File: /subsys/net/lib/dns/mdns_responder.c
Category: Null pointer dereferences
Function: send_response
Component: Networking
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | 1.0 | Coverity issue seen with CID: 178235 - Static code scan issues seen in File: /subsys/net/lib/dns/mdns_responder.c
Category: Null pointer dereferences
Function: send_response
Component: Networking
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | priority | coverity issue seen with cid static code scan issues seen in file subsys net lib dns mdns responder c category null pointer dereferences function send response component networking please fix or provide comments to square it off in coverity in the link | 1 |
28,198 | 2,700,417,072 | IssuesEvent | 2015-04-04 04:21:05 | NodineLegal/OpenLawOffice | https://api.github.com/repos/NodineLegal/OpenLawOffice | closed | Creating matters needs to check for possible duplicates | Priority : Medium Status : Confirmed Type : Enhancement | When creating a matter, possible duplicates need to be checked | 1.0 | Creating matters needs to check for possible duplicates - When creating a matter, possible duplicates need to be checked | priority | creating matters needs to check for possible duplicates when creating a matter possible duplicates need to be checked | 1
167,831 | 6,347,492,920 | IssuesEvent | 2017-07-28 07:13:49 | arquillian/smart-testing | https://api.github.com/repos/arquillian/smart-testing | closed | Ability to disable surefire provider using a flag | Component: Maven Priority: Medium Status: In Progress Type: Feature | ##### Issue Overview
There is no way of disabling `smart-testing-surefire-provider` other than commenting out / removing it from the `pom.xml`. We should implement simple flag switch so we can disable it in the same way as one can disable tests using `-DskipTests` or `-DskipITs`.
Our flag (e.g. `disableSmartTesting` and shorter form `disableST`) should follow the same convention
* if only name specified assume it's enabled
* if value (`true` or `false`) is provided it should be respected
| 1.0 | Ability to disable surefire provider using a flag - ##### Issue Overview
There is no way of disabling `smart-testing-surefire-provider` other than commenting out / removing it from the `pom.xml`. We should implement simple flag switch so we can disable it in the same way as one can disable tests using `-DskipTests` or `-DskipITs`.
Our flag (e.g. `disableSmartTesting` and shorter form `disableST`) should follow the same convention
* if only name specified assume it's enabled
* if value (`true` or `false`) is provided it should be respected
| priority | ability to disable surefire provider using a flag issue overview there is no way of disabling smart testing surefire provider other than commenting out removing it from the pom xml we should implement simple flag switch so we can disable it in the same way as one can disable tests using dskiptests or dskipits our flag e g disablesmarttesting and shorter form disablest should follow the same convention if only name specified assume it s enabled if value true or false is provided it should be respected | 1 |
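The convention described in the record above (bare name means enabled, an explicit `true`/`false` value is respected, absent means disabled) can be sketched as follows — illustrative only, since the real provider would read Maven system properties rather than raw argv:

```python
def read_flag(args, name):
    # Bare "-Dname" enables the flag; "-Dname=true|false" is respected;
    # an absent flag defaults to disabled.
    prefix = "-D" + name
    for arg in args:
        if arg == prefix:
            return True
        if arg.startswith(prefix + "="):
            return arg.split("=", 1)[1].lower() == "true"
    return False

assert read_flag(["-DdisableST"], "disableST") is True
assert read_flag(["-DdisableST=false"], "disableST") is False
assert read_flag(["-DdisableST=true"], "disableST") is True
assert read_flag([], "disableST") is False
```

This mirrors how `-DskipTests` behaves: presence alone enables it, and an explicit value wins.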
54,831 | 3,071,423,590 | IssuesEvent | 2015-08-19 12:01:49 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | "View file list": improve highlighting of folders with already downloaded files | bug imported Priority-Medium | _From [kirill.B...@gmail.com](https://code.google.com/u/118374335061098442652/) on September 07, 2010 13:50:06_
We have:
Fly: r500beta17 x64
OS: Win7 x64
Description:
When viewing a file list via "View file list", folders containing already downloaded files are not highlighted.
**Attachment:** [r500beta17_FolderWithDownloadedFileNotPaint.png](http://code.google.com/p/flylinkdc/issues/detail?id=159)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=159_ | 1.0 | "View file list": improve highlighting of folders with already downloaded files - _From [kirill.B...@gmail.com](https://code.google.com/u/118374335061098442652/) on September 07, 2010 13:50:06_
We have:
Fly: r500beta17 x64
OS: Win7 x64
Description:
When viewing a file list via "View file list", folders containing already downloaded files are not highlighted.
**Attachment:** [r500beta17_FolderWithDownloadedFileNotPaint.png](http://code.google.com/p/flylinkdc/issues/detail?id=159)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=159_ | priority | view file list improve highlighting of folders with already downloaded files from on september we have fly os description when viewing a file list via view file list folders containing already downloaded files are not highlighted attachment original issue | 1
793,062 | 27,982,099,226 | IssuesEvent | 2023-03-26 09:18:39 | 7-lin/Final_Project_BE | https://api.github.com/repos/7-lin/Final_Project_BE | closed | [feat] Exception handling for the sign-up feature | For : API For : Backend Priority : Medium Status : In Progress Type : Feature | ## Description
Implement exception handling for the form-based sign-up feature.
## Tasks(New feature)
- [ ] Create the required DTOs
- [ ] Add repository code
- [ ] Add service code
- [ ] Create exception handling and a handler
- [ ] Configure security
 | 1.0 | [feat] Exception handling for the sign-up feature - ## Description
Implement exception handling for the form-based sign-up feature.
## Tasks(New feature)
- [ ] Create the required DTOs
- [ ] Add repository code
- [ ] Add service code
- [ ] Create exception handling and a handler
- [ ] Configure security
 | priority | exception handling for the sign-up feature description implement exception handling for the form-based sign-up feature tasks new feature create the required dtos add repository code add service code create exception handling and a handler configure security | 1
619,833 | 19,536,607,129 | IssuesEvent | 2021-12-31 08:42:58 | bounswe/2021SpringGroup7 | https://api.github.com/repos/bounswe/2021SpringGroup7 | opened | CF-44 Subcomment and Pin Features for Comments | Status: In Progress Priority: Medium Frontend | User shall be able to
- Comment on comments
- Pin comments for his posts | 1.0 | CF-44 Subcomment and Pin Features for Comments - User shall be able to
- Comment on comments
- Pin comments for his posts | priority | cf subcomment and pin features for comments user shall be able to comment on comments pin comments for his posts | 1 |
771,834 | 27,094,526,114 | IssuesEvent | 2023-02-15 00:56:30 | kevslinger/bot-be-named | https://api.github.com/repos/kevslinger/bot-be-named | closed | Support for Threads : New commands | enhancement Priority: Medium | Discord Threads are very flexible and useful, and I see a strong usecase for adding support for them soon.
Most of the chan commands would end up having some sort of thread counterpart too. Off the top of my head...
- ~makethread
- ~renamethread
- ~threadcrab
- ~threadlion | 1.0 | Support for Threads : New commands - Discord Threads are very flexible and useful, and I see a strong usecase for adding support for them soon.
Most of the chan commands would end up having some sort of thread counterpart too. Off the top of my head...
- ~makethread
- ~renamethread
- ~threadcrab
- ~threadlion | priority | support for threads new commands discord threads are very flexible and useful and i see a strong usecase for adding support for them soon most of the chan commands would end up having some sort of thread counterpart too off the top of my head makethread renamethread threadcrab threadlion | 1 |
40,519 | 2,868,925,167 | IssuesEvent | 2015-06-05 21:59:48 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Request to host package js-interop | bug Fixed Priority-Medium | <a href="https://github.com/vsmenon"><img src="https://avatars.githubusercontent.com/u/2119553?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [vsmenon](https://github.com/vsmenon)**
_Originally opened as dart-lang/sdk#5413_
----
The js-interop package is located at:
https://github.com/dart-lang/js-interop | 1.0 | Request to host package js-interop - <a href="https://github.com/vsmenon"><img src="https://avatars.githubusercontent.com/u/2119553?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [vsmenon](https://github.com/vsmenon)**
_Originally opened as dart-lang/sdk#5413_
----
The js-interop package is located at:
https://github.com/dart-lang/js-interop | priority | request to host package js interop issue by originally opened as dart lang sdk the js interop package is located at | 1 |
504,033 | 14,612,101,240 | IssuesEvent | 2020-12-22 05:17:27 | goldeimer/goldeimer | https://api.github.com/repos/goldeimer/goldeimer | opened | Improve SEO & analytics features | priority medium type feature | ## Goal
Our SEO strategy yields room for improvement.
## How
- Identify low-hanging fruits.
- Pick 'em.
## Metric
Progression of ranking for relevant keyword(s) over time. Method of tracking tbd.
## Relevant keywords
Incomplete list of relevant terms:
```
nachhaltiges Klopapier
Klopapier Abo
Kompostklo
organischer Dünger
Terra Preta selber machen
Pflanzenkohle kaufen
``` | 1.0 | Improve SEO & analytics features - ## Goal
Our SEO strategy leaves room for improvement.
## How
- Identify low-hanging fruits.
- Pick 'em.
## Metric
Progression of ranking for relevant keyword(s) over time. Method of tracking tbd.
## Relevant keywords
Incomplete list of relevant terms:
```
nachhaltiges Klopapier
Klopapier Abo
Kompostklo
organischer Dünger
Terra Preta selber machen
Pflanzenkohle kaufen
``` | priority | improve seo analytics features goal our seo strategy yields room for improvement how identify low hanging fruits pick em metric progression of ranking for relevant keyword s over time method of tracking tbd relevant keywords incomplete list of relevant terms nachhaltiges klopapier klopapier abo kompostklo organischer dünger terra preta selber machen pflanzenkohle kaufen | 1 |
733,995 | 25,334,037,025 | IssuesEvent | 2022-11-18 15:24:31 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | No logging on the connector client. | Type: Bug Priority: Medium | **Describe the bug**
There appears to be no logging created by the connector client on the server where the client is installed. This makes troubleshooting impossible as to why it is not functioning.
**To Reproduce**
Steps to reproduce the behaviour:
1. Installed the connector
2. Configure connector to point at PF management IP
3. Check standard PF logging location. /usr/local/pf/logs
4. Nothing there.
**Expected behaviour**
We would expect to find a pfconnector-client.log with at least heartbeat information or something.
**Additional context**
Service is running and connected as this can be checked.
packetfence-pfconnector-remote.service - PacketFence Connector Client (Remote)
Loaded: loaded (/etc/systemd/system/packetfence-pfconnector-remote.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-09-30 09:12:55 CEST; 2h 52min ago
Main PID: 4067 (pfconnector)
Tasks: 8 (limit: 9505)
Memory: 18.1M
CPU: 2.176s
CGroup: /system.slice/packetfence-pfconnector-remote.service
└─4067 /usr/local/bin/pfconnector client ENV 127.0.0.1:22226:127.0.0.1:22226
Sep 30 09:12:55 XXXXXXXXXXXXX systemd[1]: Started PacketFence Connector Client (Remote).
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: TLS verification disabled
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: Connecting to wss://XXX.XXX.XXX.XXX:1443/api/v1/pfconnector/tunnel
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: tun: proxy#127.0.0.1:22226=>22226: Listening
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: Connected (Latency 12.281245ms)
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#80=>10.202.1.19:80: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#443=>10.202.1.19:443: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#1812=>10.202.1.19:1812/udp: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#1813=>10.202.1.19:1813/udp: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#1815=>10.202.1.19:1815/udp: Listening
| 1.0 | No logging on the connector client. - **Describe the bug**
There appears to be no logging created by the connector client on the server where the client is installed. This makes troubleshooting impossible as to why it is not functioning.
**To Reproduce**
Steps to reproduce the behaviour:
1. Installed the connector
2. Configure connector to point at PF management IP
3. Check standard PF logging location. /usr/local/pf/logs
4. Nothing there.
**Expected behaviour**
We would expect to find a pfconnector-client.log with at least heartbeat information or something.
**Additional context**
Service is running and connected as this can be checked.
packetfence-pfconnector-remote.service - PacketFence Connector Client (Remote)
Loaded: loaded (/etc/systemd/system/packetfence-pfconnector-remote.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-09-30 09:12:55 CEST; 2h 52min ago
Main PID: 4067 (pfconnector)
Tasks: 8 (limit: 9505)
Memory: 18.1M
CPU: 2.176s
CGroup: /system.slice/packetfence-pfconnector-remote.service
└─4067 /usr/local/bin/pfconnector client ENV 127.0.0.1:22226:127.0.0.1:22226
Sep 30 09:12:55 XXXXXXXXXXXXX systemd[1]: Started PacketFence Connector Client (Remote).
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: TLS verification disabled
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: Connecting to wss://XXX.XXX.XXX.XXX:1443/api/v1/pfconnector/tunnel
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: tun: proxy#127.0.0.1:22226=>22226: Listening
Sep 30 09:12:55 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:12:55 client: Connected (Latency 12.281245ms)
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#80=>10.202.1.19:80: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#443=>10.202.1.19:443: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#1812=>10.202.1.19:1812/udp: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#1813=>10.202.1.19:1813/udp: Listening
Sep 30 09:13:00 XXXXXXXXXXXXX pfconnector[4067]: 2022/09/30 09:13:00 client: tun: proxy#1815=>10.202.1.19:1815/udp: Listening
| priority | no logging on the connector client describe the bug there appears to be no logging created by the connector client on the server where the client is installed this makes troubleshooting impossible as to why it is not functioning to reproduce steps to reproduce the behaviour installed the connector configure connector to point at pf management ip check standard pf logging location usr local pf logs nothing there expected behaviour we would expect to find a pfconnector client log with at least heartbeat information or something additional context service is running and connected as this can be checked packetfence pfconnector remote service packetfence connector client remote loaded loaded etc systemd system packetfence pfconnector remote service enabled vendor preset enabled active active running since fri cest ago main pid pfconnector tasks limit memory cpu cgroup system slice packetfence pfconnector remote service └─ usr local bin pfconnector client env sep xxxxxxxxxxxxx systemd started packetfence connector client remote sep xxxxxxxxxxxxx pfconnector client tls verification disabled sep xxxxxxxxxxxxx pfconnector client connecting to wss xxx xxx xxx xxx api pfconnector tunnel sep xxxxxxxxxxxxx pfconnector client tun proxy listening sep xxxxxxxxxxxxx pfconnector client connected latency sep xxxxxxxxxxxxx pfconnector client tun proxy listening sep xxxxxxxxxxxxx pfconnector client tun proxy listening sep xxxxxxxxxxxxx pfconnector client tun proxy udp listening sep xxxxxxxxxxxxx pfconnector client tun proxy udp listening sep xxxxxxxxxxxxx pfconnector client tun proxy udp listening | 1 |
109,872 | 4,414,865,192 | IssuesEvent | 2016-08-13 18:14:12 | williewillus/Botania | https://api.github.com/repos/williewillus/Botania | closed | [1.10.2] Crash when breaking Botania "Special" flowers with Scythe from BoP - I'm running the latest 1.10.2 Botania build and whenever I attempt to break any "special" botania flower like the Pure Daisy with a Biomes' O Plenty Scythe I get this crash, http://pastebin.com/ECDDfdVx It is reproducible by multiple people running the All the Mods modpack, which you can find here, http://minecraft.curseforge.com/projects/all-the-mods | priority-medium | I'm running the latest 1.10.2 Botania build and whenever I attempt to break any "special" botania flower like the Pure Daisy with a Biomes' O Plenty Scythe I get this crash, http://pastebin.com/ECDDfdVx It is reproducible by multiple people running the All the Mods modpack, which you can find here, http://minecraft.curseforge.com/projects/all-the-mods | 1.0 | [1.10.2] Crash when breaking Botania "Special" flowers with Scythe from BoP - I'm running the latest 1.10.2 Botania build and whenever I attempt to break any "special" botania flower like the Pure Daisy with a Biomes' O Plenty Scythe I get this crash, http://pastebin.com/ECDDfdVx It is reproducible by multiple people running the All the Mods modpack, which you can find here, http://minecraft.curseforge.com/projects/all-the-mods | priority | crash when breaking botania special flowers with scythe from bop i m running the latest botania build and whenever i attempt to break any special botania flower like the pure daisy with a biomes o plenty scythe i get this crash it is reproducible by multiple people running the all the mods modpack which you can find here | 1
231,912 | 7,644,793,283 | IssuesEvent | 2018-05-08 16:30:09 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | Persist issue dispositions in the attorney check out flow | In-Progress bug-medium-priority caseflow-queue foxtrot | An attorney received a case that his judge asked him to make revisions on. So, he had already entered issue dispositions in his initial check out to the judge. Upon starting the check out flow again after he got the case back, all of those dispositions were cleared. These issue dispositions should persist so the attorney does not have to do double work. This is like how we persist the attorney's issue dispositions for judge check out.
## AC
- If a case already has issue dispositions selected and the attorney initiates check out flow, those issue dispositions should be pre-populated. | 1.0 | Persist issue dispositions in the attorney check out flow - An attorney received a case that his judge asked him to make revisions on. So, he had already entered issue dispositions in his initial check out to the judge. Upon starting the check out flow again after he got the case back, all of those dispositions were cleared. These issue dispositions should persist so the attorney does not have to do double work. This is like how we persist the attorney's issue dispositions for judge check out.
## AC
- If a case already has issue dispositions selected and the attorney initiates check out flow, those issue dispositions should be pre-populated. | priority | persist issue dispositions in the attorney check out flow an attorney received a case that his judge asked him to make revisions on so he had already entered issue dispositions in his initial check out to the judge upon starting the check out flow again after he got the case back all of those dispositions were cleared these issue dispositions should persist so the attorney does not have to do double work this is like how we persist the attorney s issue dispositions for judge check out ac if a case already has issue dispositions selected and the attorney initiates check out flow those issue dispositions should be pre populated | 1 |
433,122 | 12,501,911,225 | IssuesEvent | 2020-06-02 02:47:05 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Enhancement - Replace Comment System to Activity Block | feature: enhancement priority: medium | **Describe the solution you'd like**
https://www.loom.com/share/ec71af7b2bcb4e1bba46ab234d87ee3c
| 1.0 | Enhancement - Replace Comment System to Activity Block - **Describe the solution you'd like**
https://www.loom.com/share/ec71af7b2bcb4e1bba46ab234d87ee3c
| priority | enhancement replace comment system to activity block describe the solution you d like | 1
52,979 | 3,032,281,985 | IssuesEvent | 2015-08-05 07:44:39 | clementine-player/Clementine | https://api.github.com/repos/clementine-player/Clementine | closed | All album in one flac isn't played | bug imported Priority-Medium | _From [armaty...@gmail.com](https://code.google.com/u/117692321962641050001/) on April 19, 2012 00:29:25_
What steps will reproduce the problem? 1.Start clementine via console.
2.Manually add one-album flac. file.
3.Be amazed by flood output in console as follows:
ERROR GstEnginePipeline:506 167 "gstfilesrc.c(1055): gst_file_src_start (): /GstPipeline:pipeline/GstURIDecodeBin:uridecodebin-2822/GstFileSrc:source:
No such file "/media/Muzyka/Lacrimosa/lossless/1994-Schakal/Lacrimosa - Schakal.wav"" What is the expected output? What do you see instead? Actually be able to listen to the music from flac file :D What version of the product are you using? On what operating system? clementine 1.0
debian squeeze
all gstreamers plugins on board.
flac file is fine any other soft does not complain. Please provide any additional information below. After splitting the one-flac-album into one-flac-track clementine stops complaining. ^^
_Original issue: http://code.google.com/p/clementine-player/issues/detail?id=2884_ | 1.0 | All album in one flac isn't played - _From [armaty...@gmail.com](https://code.google.com/u/117692321962641050001/) on April 19, 2012 00:29:25_
What steps will reproduce the problem? 1.Start clementine via console.
2.Manually add one-album flac. file.
3.Be amazed by flood output in console as follows:
ERROR GstEnginePipeline:506 167 "gstfilesrc.c(1055): gst_file_src_start (): /GstPipeline:pipeline/GstURIDecodeBin:uridecodebin-2822/GstFileSrc:source:
No such file "/media/Muzyka/Lacrimosa/lossless/1994-Schakal/Lacrimosa - Schakal.wav"" What is the expected output? What do you see instead? Actually be able to listen to the music from flac file :D What version of the product are you using? On what operating system? clementine 1.0
debian squeeze
all gstreamers plugins on board.
flac file is fine any other soft does not complain. Please provide any additional information below. After splitting the one-flac-album into one-flac-track clementine stops complaining. ^^
_Original issue: http://code.google.com/p/clementine-player/issues/detail?id=2884_ | priority | all album in one flac isn t played from on april what steps will reproduce the problem start clementine via console manually add one album flac file be amazed by flood output in console as follows error gstenginepipeline gstfilesrc c gst file src start gstpipeline pipeline gsturidecodebin uridecodebin gstfilesrc source no such file media muzyka lacrimosa lossless schakal lacrimosa schakal wav what is the expected output what do you see instead actually be able to listen to the music from flac file d what version of the product are you using on what operating system clementine debian squeeze all gstreamers plugins on board flac file is fine any other soft does not complain please provide any additional information below after splitting the one flac album into one flac track clementine stops complaining original issue | 1 |
804,802 | 29,502,460,839 | IssuesEvent | 2023-06-03 00:21:02 | oppia/oppia-android | https://api.github.com/repos/oppia/oppia-android | reopened | Language selection screen [Blocked: #20, #44] | Type: Improvement Priority: Nice-to-have Impact: Medium Issue: Needs Clarification Issue: Needs Break-down ibt enhancement Work: Medium | There needs to be a page where the user can view all languages that the Oppia app supports and select which on it should be displayed in. This should be a per-profile setting that sets the language for the app, but not for the whole system. It should be easy to navigate to this screen. It should also be possible for the user to change the language only for the profile selection page temporarily as they log in. See the PRD for specifics. | 1.0 | Language selection screen [Blocked: #20, #44] - There needs to be a page where the user can view all languages that the Oppia app supports and select which on it should be displayed in. This should be a per-profile setting that sets the language for the app, but not for the whole system. It should be easy to navigate to this screen. It should also be possible for the user to change the language only for the profile selection page temporarily as they log in. See the PRD for specifics. | priority | language selection screen there needs to be a page where the user can view all languages that the oppia app supports and select which on it should be displayed in this should be a per profile setting that sets the language for the app but not for the whole system it should be easy to navigate to this screen it should also be possible for the user to change the language only for the profile selection page temporarily as they log in see the prd for specifics | 1 |
447,797 | 12,893,373,498 | IssuesEvent | 2020-07-13 21:29:27 | DSpace/dspace-angular | https://api.github.com/repos/DSpace/dspace-angular | opened | Embargo an archived Item | medium priority | From release plan spreadsheet
Estimate from release plan: none
Expressing interest: none
No additional notes | 1.0 | Embargo an archived Item - From release plan spreadsheet
Estimate from release plan: none
Expressing interest: none
No additional notes | priority | embargo an archived item from release plan spreadsheet estimate from release plan none expressing interest none no additional notes | 1 |
56,658 | 3,080,706,159 | IssuesEvent | 2015-08-22 00:54:25 | Nava2/umple-issue-test | https://api.github.com/repos/Nava2/umple-issue-test | opened | Implement the unique keyword | attributes Component-SemanticsAndGen contribSought Diffic-Med imported Priority-Medium Type-ProjectUG ucosp unique | _From [TimothyCLethbridge](https://code.google.com/u/TimothyCLethbridge/) on June 24, 2011 10:13:22_
Implement the unique constraint
class X
{
unique String a;
}
Resulting in code similar to (a mixture of Umple and Java)
public class X
{
private static ArrayList<Object> allAs = new ArrayList<Object>();
//Add the following code injection before code generation
before setA { if (containsA(aA)) { return false; } }
before setA { String oldA = a; }
after setA { if (wasSet) { allAs.remove(oldA); allAs.add(a) } }
public static boolean containsA(String aA)
{
return allAs.contains(aA);
}
}
Testing of this should start with dev_umple and should implemented in Java, PHP and Ruby. This would be a good learning exercise.
_Original issue: http://code.google.com/p/umple/issues/detail?id=87_ | 1.0 | Implement the unique keyword - _From [TimothyCLethbridge](https://code.google.com/u/TimothyCLethbridge/) on June 24, 2011 10:13:22_
Implement the unique constraint
class X
{
unique String a;
}
Resulting in code similar to (a mixture of Umple and Java)
public class X
{
private static ArrayList<Object> allAs = new ArrayList<Object>();
//Add the following code injection before code generation
before setA { if (containsA(aA)) { return false; } }
before setA { String oldA = a; }
after setA { if (wasSet) { allAs.remove(oldA); allAs.add(a) } }
public static boolean containsA(String aA)
{
return allAs.contains(aA);
}
}
Testing of this should start with dev_umple and should implemented in Java, PHP and Ruby. This would be a good learning exercise.
_Original issue: http://code.google.com/p/umple/issues/detail?id=87_ | priority | implement the unique keyword from on june implement the unique constraint class x unique string a resulting in code similar to a mixture of umple and java public class x private static arraylist allas new arraylist add the following code injection before code generation before seta if containsa aa return false before seta string olda a after seta if wasset allas remove olda allas add a public static boolean containsa string aa return allas contains aa testing of this should start with dev umple and should implemented in java php and ruby this would be a good learning exercise original issue | 1 |
730,604 | 25,181,439,403 | IssuesEvent | 2022-11-11 14:02:26 | AY2223S1-CS2103T-T09-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-T09-1/tp | closed | As a home-based business owner / reseller, I can store my transaction with suppliers / buyers | type.Story priority.Medium | so that I can track them electronically
| 1.0 | As a home-based business owner / reseller, I can store my transaction with suppliers / buyers - so that I can track them electronically
| priority | as a home based business owner reseller i can store my transaction with suppliers buyers so that i can track them electronically | 1 |
555,888 | 16,472,139,487 | IssuesEvent | 2021-05-23 16:21:48 | Team-uMigrate/umigrate | https://api.github.com/repos/Team-uMigrate/umigrate | opened | App: Sending messages in chat rooms | hard medium priority | Implement the ability to send messages between clients. When a user presses the send button, a new message object should be created in the DB (via the API) and it should show up on the recipient's device in real-time. | 1.0 | App: Sending messages in chat rooms - Implement the ability to send messages between clients. When a user presses the send button, a new message object should be created in the DB (via the API) and it should show up on the recipient's device in real-time. | priority | app sending messages in chat rooms implement the ability to send messages between clients when a user presses the send button a new message object should be created in the db via the api and it should show up on the recipient s device in real time | 1 |
720,677 | 24,801,232,050 | IssuesEvent | 2022-10-24 21:58:31 | Automattic/abacus | https://api.github.com/repos/Automattic/abacus | closed | Add absolute impact credible intervals | [!priority] medium [type] enhancement [section] experiment results [!team] explat [!milestone] current | What do we think about adding **absolute impact** credible intervals to the experiment results page like "# of users / month" and "$ in revenue / month"? This was originally suggested in this comment (pbmo2S-UZ-p2#comment-2110) and then analyzed with this method in this comment (pbxNRc-1qR-p2#comment-3353).
In addition to the first two stats, let's consider adding a third stat "absolute impact" (we can iterate on the name!):
- Absolute change: [ -0.02pp, +0.07pp ]
- Relative change: [ -1.63%, +5.04% ]
- Absolute impact: [ -188, +659 ] / month or [ -4,036, +14,149 ] / year
Design questions:
- [ ] Where do we add this "absolute impact"? Just another column in the analysis table? Replace "absolute change"?
- [ ] Do we allow toggle-able date ranges (like per month vs per year)?
| 1.0 | Add absolute impact credible intervals - What do we think about adding **absolute impact** credible intervals to the experiment results page like "# of users / month" and "$ in revenue / month"? This was originally suggested in this comment (pbmo2S-UZ-p2#comment-2110) and then analyzed with this method in this comment (pbxNRc-1qR-p2#comment-3353).
In addition to the first two stats, let's consider adding a third stat "absolute impact" (we can iterate on the name!):
- Absolute change: [ -0.02pp, +0.07pp ]
- Relative change: [ -1.63%, +5.04% ]
- Absolute impact: [ -188, +659 ] / month or [ -4,036, +14,149 ] / year
Design questions:
- [ ] Where do we add this "absolute impact"? Just another column in the analysis table? Replace "absolute change"?
- [ ] Do we allow toggle-able date ranges (like per month vs per year)?
| priority | add absolute impact credible intervals what do we think about adding absolute impact credible intervals to the experiment results page like of users month and in revenue month this was originally suggested in this comment uz comment and then analyzed with this method in this comment pbxnrc comment in addition to the first two stats let s consider adding a third stat absolute impact we can iterate on the name absolute change relative change absolute impact month or year design questions where do we add this absolute impact just another column in the analysis table replace absolute change do we allow toggle able date ranges like per month vs per year | 1 |
334,195 | 10,137,084,748 | IssuesEvent | 2019-08-02 14:29:56 | input-output-hk/cardano-wallet | https://api.github.com/repos/input-output-hk/cardano-wallet | closed | Implement `listTransactions` endpoint (no filtering) | PRIORITY[MEDIUM] | # Context
<!-- WHEN CREATED
What is the issue that we are seeing that is motivating this decision or change.
Give any elements that help understanding where this issue comes from. Leave no
room for suggestions or implicit deduction.
-->
The following endpoint is still missing but crucial for end-users:
https://input-output-hk.github.io/cardano-wallet/api/edge/#operation/listTransactions
# Decision
<!-- WHEN CREATED
Give details about the architectural decision and what it is doing. Be
extensive: use schemas and references when possible; do not hesitate to use
schemas and references when possible.
-->
The main difficulty on this endpoint lies in the `Range` parameter and the filtering and sorting requirements that goes behind it. This ticket leaves that aside and focus on implementing the endpoint "raw', returning all transactions in a non-paginated format.
# Acceptance Criteria
<!-- WHEN CREATED
Use standard vocabulary to describe requirement levels RFC-2119: Must-Should-May.
e.g.
1. The API _must_ support creation of wallets through a dedicated endpoint.
-->
1. `GET /api/v2/transactions` _must_ return a list of all known transactions
2. Transactions _must_ be sorted by descending date of insertion
---
# Development Plan
- [x] Make `selectTxHistory` sort by `slotId`
- [x] Add a function to the wallet layer that sorts and returns `[(Tx, TxMeta)]` (add a test)
- [x] Add a function `SlotId -> UTCTime`
- [x] Add a `listTransactions` function in the Api and make sure the endpoint is served. Because we now have `[(Tx, TxMeta)]` and `SlotId -> UTCTime` we can create a `[ApiTransaction]`.
- [ ] I believe we should have some integration tests. https://input-output-rnd.slack.com/archives/GBT05825V/p1564475587002600?thread_ts=1564381927.001200&cid=GBT05825V Aside from that I think we're good here.
# PR
| Number | Base |
| --- | --- |
| #497 | `master` |
| #524 | `master` |
# QA
<!-- WHEN IN PROGRESS
How are we covering acceptance criteria? Give here manual steps or point to
tests that are covering the technical decision we made.
-->
1. [x] Transaction sorting behaviour is captured in unit test models.
2. [x] There is an integration test that exercises the `/api/v2/transactions` endpoint
| 1.0 | Implement `listTransactions` endpoint (no filtering) - # Context
<!-- WHEN CREATED
What is the issue that we are seeing that is motivating this decision or change.
Give any elements that help understanding where this issue comes from. Leave no
room for suggestions or implicit deduction.
-->
The following endpoint is still missing but crucial for end-users:
https://input-output-hk.github.io/cardano-wallet/api/edge/#operation/listTransactions
# Decision
<!-- WHEN CREATED
Give details about the architectural decision and what it is doing. Be
extensive: use schemas and references when possible; do not hesitate to use
schemas and references when possible.
-->
The main difficulty on this endpoint lies in the `Range` parameter and the filtering and sorting requirements that goes behind it. This ticket leaves that aside and focus on implementing the endpoint "raw', returning all transactions in a non-paginated format.
# Acceptance Criteria
<!-- WHEN CREATED
Use standard vocabulary to describe requirement levels RFC-2119: Must-Should-May.
e.g.
1. The API _must_ support creation of wallets through a dedicated endpoint.
-->
1. `GET /api/v2/transactions` _must_ return a list of all known transactions
2. Transactions _must_ be sorted by descending date of insertion
---
# Development Plan
- [x] Make `selectTxHistory` sort by `slotId`
- [x] Add a function to the wallet layer that sorts and returns `[(Tx, TxMeta)]` (add a test)
- [x] Add a function `SlotId -> UTCTime`
- [x] Add a `listTransactions` function in the Api and make sure the endpoint is served. Because we now have `[(Tx, TxMeta)]` and `SlotId -> UTCTime` we can create a `[ApiTransaction]`.
- [ ] I believe we should have some integration tests. https://input-output-rnd.slack.com/archives/GBT05825V/p1564475587002600?thread_ts=1564381927.001200&cid=GBT05825V Aside from that I think we're good here.
# PR
| Number | Base |
| --- | --- |
| #497 | `master` |
| #524 | `master` |
# QA
<!-- WHEN IN PROGRESS
How are we covering acceptance criteria? Give here manual steps or point to
tests that are covering the technical decision we made.
-->
1. [x] Transaction sorting behaviour is captured in unit test models.
2. [x] There is an integration test that exercises the `/api/v2/transactions` endpoint
| priority | implement listtransactions endpoint no filtering context when created what is the issue that we are seeing that is motivating this decision or change give any elements that help understanding where this issue comes from leave no room for suggestions or implicit deduction the following endpoint is still missing but crucial for end users decision when created give details about the architectural decision and what it is doing be extensive use schemas and references when possible do not hesitate to use schemas and references when possible the main difficulty on this endpoint lies in the range parameter and the filtering and sorting requirements that goes behind it this ticket leaves that aside and focus on implementing the endpoint raw returning all transactions in a non paginated format acceptance criteria when created use standard vocabulary to describe requirement levels rfc must should may e g the api must support creation of wallets through a dedicated endpoint get api transactions must return a list of all known transactions transactions must be sorted by descending date of insertion development plan make selecttxhistory sort by slotid add a function to the wallet layer that sorts and returns add a test add a function slotid utctime add a listtransactions function in the api and make sure the endpoint is served because we now have and slotid utctime we can create a i believe we should have some integration tests aside from that i think we re good here pr number base master master qa when in progress how are we covering acceptance criteria give here manual steps or point to tests that are covering the technical decision we made transaction sorting behaviour is captured in unit test models there is an integration test that exercises the api transactions endpoint | 1 |
75,477 | 3,462,770,692 | IssuesEvent | 2015-12-21 03:46:11 | pentoo/pentoo-historical | https://api.github.com/repos/pentoo/pentoo-historical | closed | Pentoo minimal version | auto-migrated Priority-Medium Type-Enhancement | ```
I just hate it when I can't use my own laptop during a pentest and have to use
a crappy windows host instead.
We need a minimal pentoo with msf, nmap, burp and a few others in order to have
a small footprint iso/vm to deploy quickly during pentests.
Here's an attempt of a minimal use flag for the pentoo meta-package.
E17 was chosen arbitrarily but we can switch to xfce.
We'll probably need to change a few things in pentoo-system too.
Also, we will have to remove grsec and do a 32bit kernel only due to probable
hw limitation on random machines.
```
Original issue reported on code.google.com by `grimm...@pentoo.ch` on 5 Feb 2014 at 10:52
Attachments:
* [pentoo-minimal.patch](https://storage.googleapis.com/google-code-attachments/pentoo/issue-222/comment-0/pentoo-minimal.patch)
| 1.0 | Pentoo minimal version - ```
I just hate it when I can't use my own laptop during a pentest and have to use
a crappy windows host instead.
We need a minimal pentoo with msf, nmap, burp and a few others in order to have
a small footprint iso/vm to deploy quickly during pentests.
Here's an attempt of a minimal use flag for the pentoo meta-package.
E17 was chosen arbitrarily but we can switch to xfce.
We'll probably need to change a few things in pentoo-system too.
Also, we will have to remove grsec and do a 32bit kernel only due to probable
hw limitation on random machines.
```
Original issue reported on code.google.com by `grimm...@pentoo.ch` on 5 Feb 2014 at 10:52
Attachments:
* [pentoo-minimal.patch](https://storage.googleapis.com/google-code-attachments/pentoo/issue-222/comment-0/pentoo-minimal.patch)
| priority | pentoo minimal version i just hate it when i can t use my own laptop during a pentest and have to use a crappy windows host instead we need a minimal pentoo with msf nmap burp and a few others in order to have a small footprint iso vm to deploy quickly during pentests here s an attempt of a minimal use flag for the pentoo meta package was chosen arbitrarily but we can switch to xfce we ll probably need to change a few things in pentoo system too also we will have to remove grsec and do a kernel only due to probable hw limitation on random machines original issue reported on code google com by grimm pentoo ch on feb at attachments | 1 |
30,082 | 2,722,219,245 | IssuesEvent | 2015-04-14 01:10:39 | CruxFramework/crux-smart-faces | https://api.github.com/repos/CruxFramework/crux-smart-faces | closed | Close buttons do not appear on Internet Explorer 8 | bug imported Milestone-M14-C4 Module-CruxWidgets Priority-Medium TargetVersion-5.3.0 | _From [flavia.jesus@triggolabs.com](https://code.google.com/u/flavia.jesus@triggolabs.com/) on March 25, 2015 17:43:06_
Several close buttons do not appear on Internet Explorer 8, for example DialogViewContainer component.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=662_ | 1.0 | Close buttons do not appear on Internet Explorer 8 - _From [flavia.jesus@triggolabs.com](https://code.google.com/u/flavia.jesus@triggolabs.com/) on March 25, 2015 17:43:06_
Several close buttons do not appear on Internet Explorer 8, for example DialogViewContainer component.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=662_ | priority | close buttons do not appear on internet explorer from on march several close buttons do not appear on internet explorer for example dialogviewcontainer component original issue | 1
145,188 | 5,560,081,040 | IssuesEvent | 2017-03-24 18:30:46 | CS2103JAN2017-W14-B4/main | https://api.github.com/repos/CS2103JAN2017-W14-B4/main | closed | As a power user I can map standard commands to my preferred shortcut commands | priority.medium type.story | so I can be familiar with my own modified commands | 1.0 | As a power user I can map standard commands to my preferred shortcut commands - so I can be familiar with my own modified commands | priority | as a power user i can map standard commands to my preferred shortcut commands so i can be familiar with my own modified commands | 1 |
126,497 | 4,996,537,166 | IssuesEvent | 2016-12-09 14:12:55 | softdevteam/krun | https://api.github.com/repos/softdevteam/krun | opened | Post-exec commands | bug medium priority (a clear improvement but not a blocker for publication) | The post-exec commands run directly after every benchmark has completed, and typically include commands to bring the network back up and scp the current data and logs to a different machine.
Because post-execs run *directly* after the benchmark is complete, the log that is tarred up and scp'd does not contain some very useful information (such as ETAs) which are computed between the benchmark completing and the next reboot.
Additionally, data from the *last* benchmark to run is not included in the scp tarball, which is confusing for the user. | 1.0 | Post-exec commands - The post-exec commands run directly after every benchmark has completed, and typically include commands to bring the network back up and scp the current data and logs to a different machine.
Because post-execs run *directly* after the benchmark is complete, the log that is tarred up and scp'd does not contain some very useful information (such as ETAs) which are computed between the benchmark completing and the next reboot.
Additionally, data from the *last* benchmark to run is not included in the scp tarball, which is confusing for the user. | priority | post exec commands the post exec commands run directly after every benchmark has completed and typically include commands to bring the network back up and scp the current data and logs to a different machine because post execs run directly after the benchmark is complete the log that is tarred up and scp d does not contain some very useful information such as etas which are computed between the benchmark completing and the next reboot additionally data from the last benchmark to run is not included in the scp tarball which is confusing for the user | 1 |
623,435 | 19,667,726,106 | IssuesEvent | 2022-01-11 01:27:13 | cdklabs/construct-hub-webapp | https://api.github.com/repos/cdklabs/construct-hub-webapp | closed | Inconsistent display of constructor arguments between python and typescript references | effort/medium priority/p2 risk/medium stale | The `Python` and `TypeScript` experience differ when displaying the information needed to initialize a construct/class.
In Python, the initializer looks like so:
<img width="401" alt="Screen Shot 2021-06-20 at 6 38 01 PM" src="https://user-images.githubusercontent.com/1428812/122680200-c9a31400-d1f6-11eb-8540-628b501f79ca.png">
Note that all properties needed to initialize `AwsAuth` are directly displayed in the initializer section.
In TypeScript, the same initializer looks like:
<img width="577" alt="Screen Shot 2021-06-20 at 6 36 30 PM" src="https://user-images.githubusercontent.com/1428812/122680256-f6efc200-d1f6-11eb-9930-1eb03a97c3b6.png">
Note that instead of the `cluster` argument, it displays `AwsAuthProps`.
This is a direct consequence of the fact that in python, structs are flattened out into their individual properties. This means that a typescript user has to click on `AwsAuthProps` to understand which properties need to be passed into the initializer.
While this is technically accurate, it creates an inconsistent experience, and is actually a downgrade from the typescript API reference we currently have for `@aws-cdk/*` packages, where a table of properties is displayed immediately in the initializer level, without the extra hop.
<img width="723" alt="Screen Shot 2021-06-20 at 6 44 42 PM" src="https://user-images.githubusercontent.com/1428812/122680396-99a84080-d1f7-11eb-8b7c-38e6ad978d1f.png">
| 1.0 | Inconsistent display of constructor arguments between python and typescript references - The `Python` and `TypeScript` experience differ when displaying the information needed to initialize a construct/class.
In Python, the initializer looks like so:
<img width="401" alt="Screen Shot 2021-06-20 at 6 38 01 PM" src="https://user-images.githubusercontent.com/1428812/122680200-c9a31400-d1f6-11eb-8540-628b501f79ca.png">
Note that all properties needed to initialize `AwsAuth` are directly displayed in the initializer section.
In TypeScript, the same initializer looks like:
<img width="577" alt="Screen Shot 2021-06-20 at 6 36 30 PM" src="https://user-images.githubusercontent.com/1428812/122680256-f6efc200-d1f6-11eb-9930-1eb03a97c3b6.png">
Note that instead of the `cluster` argument, it displays `AwsAuthProps`.
This is a direct consequence of the fact that in python, structs are flattened out into their individual properties. This means that a typescript user has to click on `AwsAuthProps` to understand which properties need to be passed into the initializer.
While this is technically accurate, it creates an inconsistent experience, and is actually a downgrade from the typescript API reference we currently have for `@aws-cdk/*` packages, where a table of properties is displayed immediately in the initializer level, without the extra hop.
<img width="723" alt="Screen Shot 2021-06-20 at 6 44 42 PM" src="https://user-images.githubusercontent.com/1428812/122680396-99a84080-d1f7-11eb-8b7c-38e6ad978d1f.png">
| priority | inconsistent display of constructor arguments between python and typescript references the python and typescript experience differ when displaying the information needed to initialize a construct class in python the initializer looks like so img width alt screen shot at pm src note that all properties needed to initialize awsauth are directly displayed in the initializer section in typescript the same initializer looks like img width alt screen shot at pm src note that instead of the cluster argument it displays awsauthprops this is a direct consequence of the fact that in python structs are flattened out into their individual properties this means that a typescript user has to click on awsauthprops to understand which properties need to be passed into the initializer while this is technically accurate it creates an inconsistent experience and is actually a downgrade from the typescript api reference we currently have for aws cdk packages where a table of properties is displayed immediately in the initializer level without the extra hop img width alt screen shot at pm src | 1 |
55,366 | 3,073,043,198 | IssuesEvent | 2015-08-19 19:56:31 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Configurable TIMEOUT values | bug imported Priority-Medium | _From [glenview...@gmail.com](https://code.google.com/u/110087215095127878251/) on August 22, 2012 10:52:21_
It would be great if Robotium offered setTimeout(int timeout), setSmallTimeout(int smallTimeout) methods.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=314_ | 1.0 | Configurable TIMEOUT values - _From [glenview...@gmail.com](https://code.google.com/u/110087215095127878251/) on August 22, 2012 10:52:21_
It would be great if Robotium offered setTimeout(int timeout), setSmallTimeout(int smallTimeout) methods.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=314_ | priority | configurable timeout values from on august it would be great if robotium offered settimeout int timeout setsmalltimeout int smalltimeout methods original issue | 1 |
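A hedged Python sketch of what the requested configurable timeouts could look like (the actual request is for Java `setTimeout(int)` / `setSmallTimeout(int)` methods in Robotium; the class name, defaults, and polling interval here are assumptions):

```python
import time

class Waiter:
    """Sketch of Robotium-style waits with configurable default timeouts."""

    def __init__(self, timeout_ms: int = 20000, small_timeout_ms: int = 10000):
        self._timeout_ms = timeout_ms
        self._small_timeout_ms = small_timeout_ms

    def set_timeout(self, ms: int) -> None:
        self._timeout_ms = ms

    def set_small_timeout(self, ms: int) -> None:
        self._small_timeout_ms = ms

    def wait_for(self, condition, small: bool = False) -> bool:
        """Poll `condition` until it returns True or the chosen timeout expires."""
        budget = (self._small_timeout_ms if small else self._timeout_ms) / 1000.0
        deadline = time.monotonic() + budget
        while time.monotonic() < deadline:
            if condition():
                return True
            time.sleep(0.01)
        return False

w = Waiter()
w.set_small_timeout(50)  # shrink the small timeout so the demo returns quickly
```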
266,223 | 8,364,394,646 | IssuesEvent | 2018-10-03 22:50:49 | theAsmodai/metamod-r | https://api.github.com/repos/theAsmodai/metamod-r | closed | Crashing server with steam-Sven Coop when installing metamod-r | OS: Windows Priority: Medium Status: Pending Type: Bug Type: Help wanted | Please help, REHLDS is not working and does not find the game dll :( | 1.0 | Crashing server with steam-Sven Coop when installing metamod-r - Please help, REHLDS is not working and does not find the game dll :( | priority | crashing server with steam sven coop when installing metamod r please help rehlds is not working and not search game dll | 1
146,429 | 5,621,383,712 | IssuesEvent | 2017-04-04 09:48:46 | linux-audit/audit-userspace | https://api.github.com/repos/linux-audit/audit-userspace | opened | BUG: errormsg descriptions drifting or overloaded | bug priority/medium | A number of error message descriptions have drifted from the conditions that
caused them in audit_rule_fieldpair_data() including expansion of fields to be used by the user filter list, restriction to the exit list only and changing an
operator to "equals" only. Correct these, using the new errormsg macros.
Several return codes were overloaded and no longer giving helpful error
return messages from the field and comparison functions audit_rule_fieldpair_data() and audit_rule_interfield_comp_data().
Introduce 3 new macros with more helpful error descriptions for data
missing, incompatible fields and incompatible values. | 1.0 | BUG: errormsg descriptions drifting or overloaded - A number of error message descriptions have drifted from the conditions that
caused them in audit_rule_fieldpair_data() including expansion of fields to be used by the user filter list, restriction to the exit list only and changing an
operator to "equals" only. Correct these, using the new errormsg macros.
Several return codes were overloaded and no longer giving helpful error
return messages from the field and comparison functions audit_rule_fieldpair_data() and audit_rule_interfield_comp_data().
Introduce 3 new macros with more helpful error descriptions for data
missing, incompatible fields and incompatible values. | priority | bug errormsg descriptions drifting or overloaded a number of error message descriptions have drifted from the conditions that caused them in audit rule fieldpair data including expansion of fields to be used by the user filter list restriction to the exit list only and changing an operator to equals only correct these using the new errormsg macros several return codes were overloaded and no longer giving helpful error return messages from the field and comparison functions audit rule fieldpair data and audit rule interfield comp data introduce new macros with more helpful error descriptions for data missing incompatible fields and incompatible values | 1 |
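The macro idea above — one distinct, descriptive message per failure mode instead of overloaded return codes — can be sketched as follows; the codes and strings are illustrative, not audit-userspace's actual values:

```python
# Distinct codes for the three failure modes named in the issue: data
# missing, incompatible fields, and incompatible values. (Illustrative
# values only; the real project defines these as C macros.)
ERR_DATA_MISSING = -25
ERR_FIELD_INCOMPAT = -26
ERR_VALUE_INCOMPAT = -27

ERROR_MESSAGES = {
    ERR_DATA_MISSING: "value is missing for the field",
    ERR_FIELD_INCOMPAT: "field is incompatible with this filter list",
    ERR_VALUE_INCOMPAT: "comparison values are incompatible",
}

def audit_errmsg(code: int) -> str:
    """Map a return code to a helpful description instead of a generic error."""
    return ERROR_MESSAGES.get(code, "unknown error")
```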
455,140 | 13,112,435,684 | IssuesEvent | 2020-08-05 02:12:07 | Seamonster778778778788/SQBeyondPublic | https://api.github.com/repos/Seamonster778778778788/SQBeyondPublic | closed | Paid twice for quests | bug medium priority | I bought a fighter at spawn in order to do the 'leave spawn' quest, but I was informed by someone that I needed to click on the [step-by-step guide] button in the book first and had to buy another one. (IDK if this is correct)
I bought a fighter, followed the instructions the quest gave me, and when I completed it the 'quest completed' message appeared twice and I appeared to have gotten 20K instead of the 10K that was the normal quest reward.

Later I did the cryopod quest, this time by clicking on the step-by-step guide button before doing anything, and when I broke and re-made my cryopod the 'quest completed' message appeared again and I was (I think) also paid again. (Forgot to screenshot this time)
| 1.0 | Paid twice for quests - I bought a fighter at spawn in order to do the 'leave spawn' quest, but I was informed by someone that I needed to click on the [step-by-step guide] button in the book first and had to buy another one. (IDK if this is correct)
I bought a fighter, followed the instructions the quest gave me, and when I completed it the 'quest completed' message appeared twice and I appeared to have gotten 20K instead of the 10K that was the normal quest reward.

Later I did the cryopod quest, this time by clicking on the step-by-step guide button before doing anything, and when I broke and re-made my cryopod the 'quest completed' message appeared again and I was (I think) also paid again. (Forgot to screenshot this time)
| priority | paid twice for quests i bought a fighter at spawn in order to do the leave spawn quest but i was informed by someone that i needed to click on the button in the book first and had to buy another one idk if this is correct i bought a fighter followed the instructions the quest gave me and when i completed it the quest completed message appeared twice and i appeared to have gotten instead of the that was the normal quest reward later i did the cryopod quest this time by clicking on the step by step guide button before doing anything and when i broke and re made my cryopod the quest completed message appeared again and i was i think also paid again forgot to screenshot this time | 1 |
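One common fix for this class of bug is to make quest completion idempotent, so a duplicated "quest completed" event can never pay twice. A minimal Python sketch (names and amounts are illustrative, not the game's actual code):

```python
class QuestLog:
    """Tracks completed quests so each reward is paid at most once."""

    def __init__(self) -> None:
        self.balance = 0
        self._completed: set[str] = set()

    def complete(self, quest_id: str, reward: int) -> bool:
        """Pay the reward only on the first completion event for this quest."""
        if quest_id in self._completed:
            return False  # duplicate event: ignore instead of paying again
        self._completed.add(quest_id)
        self.balance += reward
        return True

log = QuestLog()
first = log.complete("leave-spawn", 10000)
second = log.complete("leave-spawn", 10000)  # same event delivered twice
```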
569,076 | 16,993,992,980 | IssuesEvent | 2021-07-01 02:22:29 | cjs8487/SS-Randomizer-Tracker | https://api.github.com/repos/cjs8487/SS-Randomizer-Tracker | closed | Dungeon Panel Rework | Medium Priority enhancement | Rework the Dungeon Panel in order to better use the space and include more features.
Additional Features:
- Icon/button to click to open the dungeons location list
Layout Tweaks:
- Entrance Rando
- Combine Small Keys and Boss Keys onto a single row
- Combine Entered and Required onto a second row
- Without entrance rando
- Combine Small Keys, Boss Key, and Required onto a single row | 1.0 | Dungeon Panel Rework - Rework the Dungeon Panel in order to better use the space and include more features.
Additional Features:
- Icon/button to click to open the dungeons location list
Layout Tweaks:
- Entrance Rando
- Combine Small Keys and Boss Keys onto a single row
- Combine Entered and Required onto a second row
- Without entrance rando
- Combine Small Keys, Boss Key, and Required onto a single row | priority | dungeon panel rework rework the dungeon panel in order to better use the space and include more features additional features icon button to click to open the dungeons location list layout tweaks entrance rando combine small keys and boss keys onto a single row combine entered and required onto a second row without entrance rando combine small keys boss key and required onto a single row | 1 |
752,103 | 26,273,286,205 | IssuesEvent | 2023-01-06 19:11:44 | minio/mc | https://api.github.com/repos/minio/mc | closed | `mc mirror` without `--overwrite` does not continue syncing other objects after one fails to synchronize | community priority: medium | ## Expected behavior
As the [docs](https://docs.min.io/minio/baremetal/reference/minio-mc/mc-mirror.html#mc.mirror.-overwrite) state and as one would expect using common sense:
> Without --overwrite, if an object already exists on the Destination, the mirror process fails to synchronize that object. mc mirror logs an error and continues to synchronize other objects.
## Actual behavior
It does not, in fact, continue to synchronize other objects.
## Steps to reproduce the behavior
```sh
$ touch a.txt
$ mc mirror --json . r2/test # mirrors a.txt successfully
{
"status": "success",
"source": "/home/.../a.txt",
"target": "r2/test/a.txt",
"size": 0,
"totalCount": 1,
"totalSize": 0
}
{
"status": "success",
"total": 0,
"transferred": 0,
"speed": 0
}
$ touch a.txt b.txt
$ mc mirror --json . r2/test # does NOT mirror b.txt because a.txt was updated (even though it clearly has a status of success)
{
"status": "success",
"source": "/home/.../b.txt",
"target": "r2/test/b.txt",
"size": 0,
"totalCount": 1,
"totalSize": 0
}
{
"status": "error",
"error": {
"message": "Failed to perform mirroring, with error condition (mm-source-mtime)",
"cause": {
"message": "Overwrite not allowed for `https://.../test/a.txt`. Use `--overwrite` to override this behavior.",
"error": {}
},
"type": "error"
}
}
{
"status": "success",
"total": 0,
"transferred": 0,
"speed": 0
}
$ mc ls r2/test # only a.txt exists
[2022-08-15 19:35:50 CEST] 0B STANDARD a.txt
```
## mc --version
- mc version RELEASE.2022-08-11T00-30-48Z (commit-id=c2c2ab4299bbb243c55644984392f1c39af499cf)
And no, I do not want to enable `--overwrite`.
| 1.0 | `mc mirror` without `--overwrite` does not continue syncing other objects after one fails to synchronize - ## Expected behavior
As the [docs](https://docs.min.io/minio/baremetal/reference/minio-mc/mc-mirror.html#mc.mirror.-overwrite) state and as one would expect using common sense:
> Without --overwrite, if an object already exists on the Destination, the mirror process fails to synchronize that object. mc mirror logs an error and continues to synchronize other objects.
## Actual behavior
It does not, in fact, continue to synchronize other objects.
## Steps to reproduce the behavior
```sh
$ touch a.txt
$ mc mirror --json . r2/test # mirrors a.txt successfully
{
"status": "success",
"source": "/home/.../a.txt",
"target": "r2/test/a.txt",
"size": 0,
"totalCount": 1,
"totalSize": 0
}
{
"status": "success",
"total": 0,
"transferred": 0,
"speed": 0
}
$ touch a.txt b.txt
$ mc mirror --json . r2/test # does NOT mirror b.txt because a.txt was updated (even though it clearly has a status of success)
{
"status": "success",
"source": "/home/.../b.txt",
"target": "r2/test/b.txt",
"size": 0,
"totalCount": 1,
"totalSize": 0
}
{
"status": "error",
"error": {
"message": "Failed to perform mirroring, with error condition (mm-source-mtime)",
"cause": {
"message": "Overwrite not allowed for `https://.../test/a.txt`. Use `--overwrite` to override this behavior.",
"error": {}
},
"type": "error"
}
}
{
"status": "success",
"total": 0,
"transferred": 0,
"speed": 0
}
$ mc ls r2/test # only a.txt exists
[2022-08-15 19:35:50 CEST] 0B STANDARD a.txt
```
## mc --version
- mc version RELEASE.2022-08-11T00-30-48Z (commit-id=c2c2ab4299bbb243c55644984392f1c39af499cf)
And no, I do not want to enable `--overwrite`.
| priority | mc mirror without overwrite does not continue syncing other objects after one fails to synchronize expected behavior as the state and as one would expect using common sense without overwrite if an object already exists on the destination the mirror process fails to synchronize that object mc mirror logs an error and continues to synchronize other objects actual behavior it does not in fact continues to synchronize other objects steps to reproduce the behavior sh touch a txt mc mirror json test mirrors a txt successfully status success source home a txt target test a txt size totalcount totalsize status success total transferred speed touch a txt b txt mc mirror json test does not mirror b txt because a txt was updated even though it clearly has a status of success status success source home b txt target test b txt size totalcount totalsize status error error message failed to perform mirroring with error condition mm source mtime cause message overwrite not allowed for use overwrite to override this behavior error type error status success total transferred speed mc ls test only a txt exists standard a txt mc version mc version release commit id and no i do not want to enable overwrite | 1 |
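The documented contract quoted above — log the per-object error and keep synchronizing the rest — can be sketched as a loop that collects failures instead of aborting. This illustrates the expected behavior (b.txt still mirrored when a.txt conflicts), not minio/mc's actual Go implementation:

```python
def mirror(objects, copy_fn, log_fn):
    """Try to copy every object; record failures instead of stopping early."""
    failures = []
    for name in objects:
        try:
            copy_fn(name)
        except Exception as exc:  # e.g. "overwrite not allowed"
            log_fn(f"mirror failed for {name}: {exc}")
            failures.append(name)
    return failures

copied, logs = [], []

def copy_fn(name):
    # Simulate the overwrite-not-allowed conflict on the updated object.
    if name == "a.txt":
        raise RuntimeError("Overwrite not allowed")
    copied.append(name)

failed = mirror(["a.txt", "b.txt"], copy_fn, logs.append)
```

With this structure, a conflict on one object produces one log line and one entry in `failed`, while every other object is still synchronized.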
231,711 | 7,642,345,598 | IssuesEvent | 2018-05-08 08:57:31 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | ctx.session inside policies is always an empty object? | priority: medium status: need more informations 🤔 type: bug 🐛 | <!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 9.x.x -->
<!-- npm 5.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**:
9.11.1
- **npm version**:
5.6.0
- **Strapi version**:
3.0.0-alpha.12.1.2
- **Database**:
MongoDB
- **Operating system**:
Windows 10
**What is the current behavior?**
`ctx.session` inside of policies is currently an empty `{}` object
**What is the expected behavior?**
https://strapi.io/documentation/guides/policies.html indicates that we should be able to access the session inside policies.
---
- [x] I'm sure that this feature hasn't already been referenced.
| 1.0 | ctx.session inside policies is always an empty object? - <!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 9.x.x -->
<!-- npm 5.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**:
9.11.1
- **npm version**:
5.6.0
- **Strapi version**:
3.0.0-alpha.12.1.2
- **Database**:
MongoDB
- **Operating system**:
Windows 10
**What is the current behavior?**
`ctx.session` inside of policies is currently an empty `{}` object
**What is the expected behavior?**
https://strapi.io/documentation/guides/policies.html indicates that we should be able to access the session inside policies.
---
- [x] I'm sure that this feature hasn't already been referenced.
| priority | ctx session inside policies is always an empty object informations node js version npm version strapi version alpha database mongodb operating system windows what is the current behavior ctx session inside of policies is currently an empty object what is the expected behavior indicates that we should be able to access the session inside policies i m sure that this feature hasn t already been referenced | 1 |
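The documented policy contract — a policy receives the request context and can read its session — can be sketched generically. Strapi itself is Node/Koa, so this Python version only illustrates the shape, with an illustrative `user_id` key:

```python
class Context:
    """Stand-in for a request context carrying a session dict."""

    def __init__(self, session=None):
        self.session = session if session is not None else {}

def is_authenticated_policy(ctx, next_handler):
    """Let the request through only if the session carries a user id."""
    if ctx.session.get("user_id"):
        return next_handler(ctx)
    return "401 Unauthorized"

ok = is_authenticated_policy(Context({"user_id": 42}), lambda ctx: "200 OK")
denied = is_authenticated_policy(Context(), lambda ctx: "200 OK")
```

The bug report amounts to saying that, in practice, `ctx.session` arrives as the empty-dict case even for authenticated requests.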
597,771 | 18,171,381,237 | IssuesEvent | 2021-09-27 20:27:26 | CanberraOceanRacingClub/namadgi3 | https://api.github.com/repos/CanberraOceanRacingClub/namadgi3 | closed | Finalise arrangement for headsail double luff tracks | priority 2: Medium Working bee | Sam tells me:
* We have the foil installed
* We need our headsails measured (why?)
* Phil may have the fitting that is used to load the sails -- it needs to be returned to the boat
More investigation required.
@peterottesen might know more details. | 1.0 | Finalise arrangement for headsail double luff tracks - Sam tells me:
* We have the foil installed
* We need our headsails measured (why?)
* Phil may have the fitting that is used to load the sails -- it needs to be returned to the boat
More investigation required.
@peterottesen might know more details. | priority | finalise arrangement for headsail double luff tracks sam tells me we have the foil installed we need our headsails measure why phil may have the fitting that is used to load the sails it needs to be returned to the boat more investigation required peterottesen might know more details | 1 |
30,589 | 2,724,295,737 | IssuesEvent | 2015-04-14 17:02:24 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | Crux report error when using Html Entities in Views | bug imported Priority-Medium | _From [moac...@gmail.com](https://code.google.com/u/116048952297984795716/) on July 09, 2013 15:43:57_
What steps will reproduce the problem? 1. Create a Div Element and put a Html Entity like "&nbsp;" What is the expected output? What do you see instead? The following error is reported:
The entity "nbsp" was referenced, but not declared. What version of the product are you using? On what operating system? On what Browser? Crux 5.0.0.2-initial
Windows7
Chrome
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=281_ | 1.0 | Crux report error when using Html Entities in Views - _From [moac...@gmail.com](https://code.google.com/u/116048952297984795716/) on July 09, 2013 15:43:57_
What steps will reproduce the problem? 1. Create a Div Element and put a Html Entity like "&nbsp;" What is the expected output? What do you see instead? The following error is reported:
The entity "nbsp" was referenced, but not declared. What version of the product are you using? On what operating system? On what Browser? Crux 5.0.0.2-initial
Windows7
Chrome
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=281_ | priority | crux report error when using html entities in views from on july what steps will reproduce the problem create a div element and put a html entity like nbsp what is the expected output what do you see instead the following error is reported the entity nbsp was referenced but not declared what version of the product are you using on what operating system on what browser crux initial chrome original issue | 1 |
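The root cause is that named HTML entities such as `&nbsp;` are not predefined in XML, so an XML view parser rejects them unless they are declared; a numeric character reference works everywhere. A quick illustration with Python's stdlib parser (Crux parses its views in Java, but the rule is the same):

```python
import xml.etree.ElementTree as ET

def parse(xml_text):
    """Return (tree, None) on success or (None, error_message) on failure."""
    try:
        return ET.fromstring(xml_text), None
    except ET.ParseError as exc:
        return None, str(exc)

# Named HTML entities are not predefined in XML, so the parser rejects them:
_, err = parse("<div>&nbsp;</div>")
# A numeric character reference for the same character is always valid XML:
tree, _ = parse("<div>&#160;</div>")
```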
51,848 | 3,014,630,977 | IssuesEvent | 2015-07-29 15:39:32 | jpchanson/BeSeenium | https://api.github.com/repos/jpchanson/BeSeenium | opened | Feedback to web interface | Core functionality Medium Priority | Need some way of getting the output back onto the web interface, perhaps by dropping a flatfile that can be picked up with something on the web end (data enclosed in iframe for instance) | 1.0 | Feedback to web interface - Need some way of getting the output back onto the web interface, perhaps by dropping a flatfile that can be picked up with something on the web end (data enclosed in iframe for instance) | priority | feedback to web interface need some way of getting the output back onto the web interface perhaps by dropping a flatfile that can be picked up with something on the web end data enclosed in iframe for instance | 1 |
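The flatfile hand-off suggested above can be sketched as an atomic write (temp file + rename) on the worker side and a plain read on the web side, so the page never sees a half-written file. The path and JSON format are illustrative assumptions:

```python
import json
import os
import tempfile

def publish_results(results: dict, path: str) -> None:
    """Atomically replace `path` so a reader never sees a partial file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(results, f)
    os.replace(tmp, path)  # atomic rename on POSIX and Windows

def read_results(path: str) -> dict:
    """What the web end would do when rendering the iframe content."""
    with open(path) as f:
        return json.load(f)

out = os.path.join(tempfile.gettempdir(), "beseenium_results.json")
publish_results({"passed": 12, "failed": 1}, out)
data = read_results(out)
```

The atomic rename is the key design choice: it lets the web end poll the file freely without coordinating with the writer.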
59,535 | 3,114,046,535 | IssuesEvent | 2015-09-03 05:34:08 | cs2103aug2015-t16-1j/main | https://api.github.com/repos/cs2103aug2015-t16-1j/main | opened | As a user, I can auto-save/auto-sync every certain time interval | priority.medium type.story | so that there will not be any info-loss accidentally
| 1.0 | As a user, I can auto-save/auto-sync every certain time interval - so that there will not be any info-loss accidentally
| priority | as a user i can auto save auto sync every certain time interval so that there will not be any info loss accidentally | 1 |
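The auto-save interval described in the story above can be sketched with a rescheduling timer; the interval and the save callback are illustrative assumptions, and a real app would also save on shutdown:

```python
import threading

class AutoSaver:
    """Calls `save_fn` every `interval_s` seconds until stopped."""

    def __init__(self, interval_s, save_fn):
        self._interval = interval_s
        self._save_fn = save_fn
        self._timer = None

    def _tick(self):
        self._save_fn()
        self.start()  # reschedule the next save

    def start(self):
        self._timer = threading.Timer(self._interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

saves = []
saver = AutoSaver(0.05, lambda: saves.append("saved"))
saver.start()
threading.Event().wait(0.3)  # let a few intervals elapse
saver.stop()
```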
772,654 | 27,130,985,225 | IssuesEvent | 2023-02-16 09:46:22 | LuanRT/YouTube.js | https://api.github.com/repos/LuanRT/YouTube.js | closed | Add support for hashtag page | enhancement priority: medium | ### Describe your suggestion
add a getHashtag method to get info from a hashtag page (ex: https://www.youtube.com/hashtag/shorts )
### Other details
I tried to implement this myself but I do not know enough about protobuf:
`execute('/browse', { browseId: 'FEhashtag', params: 'protobuf hashtag'})`
### Checklist
- [X] I am running the latest version.
- [X] I checked the documentation and found no answer.
- [X] I have searched the existing issues and made sure this is not a duplicate.
- [X] I have provided sufficient information. | 1.0 | Add support for hashtag page - ### Describe your suggestion
add a getHashtag method to get info from a hashtag page (ex: https://www.youtube.com/hashtag/shorts )
### Other details
I tried to implement this myself but I do not know enough about protobuf:
`execute('/browse', { browseId: 'FEhashtag', params: 'protobuf hashtag'})`
### Checklist
- [X] I am running the latest version.
- [X] I checked the documentation and found no answer.
- [X] I have searched the existing issues and made sure this is not a duplicate.
- [X] I have provided sufficient information. | priority | add support for hashtag page describe your suggestion add a gethashtag method to get info from a hashtag page ex other details i tried to implement myself this but i do not know enough about protobuf execute browse browseid fehashtag params protobuf hashtag checklist i am running the latest version i checked the documentation and found no answer i have searched the existing issues and made sure this is not a duplicate i have provided sufficient information | 1 |
746,557 | 26,035,030,106 | IssuesEvent | 2022-12-22 03:25:02 | EthicalSoftwareCommunity/HippieVerse-Game | https://api.github.com/repos/EthicalSoftwareCommunity/HippieVerse-Game | closed | Add shield in HF | enhancement MEDIUM PRIORITY HF (HippieFall) | The player will pick up the shield during the fall.
Once activated, the shield will be available for Nth amount of time.
It gives resistance to any damage:
to collisions
to traps | 1.0 | Add shield in HF - The player will pick up the shield during the fall.
Once activated, the shield will be available for Nth amount of time.
It gives resistance to any damage:
to collisions
to traps | priority | add shield in hf the player will pick up the shield during the fall once activated the shield will be available for nth amount of time it gives resistance to any damage to collisions to traps | 1 |
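The shield mechanic above (timed activation, immunity to both collision and trap damage while active) can be sketched as follows; durations and HP values are illustrative assumptions:

```python
class Player:
    """Minimal player with a time-limited damage shield."""

    def __init__(self, hp: int = 100) -> None:
        self.hp = hp
        self._shield_until = 0.0

    def activate_shield(self, now: float, duration: float) -> None:
        """Picked-up shield becomes active for `duration` seconds."""
        self._shield_until = now + duration

    def shield_active(self, now: float) -> bool:
        return now < self._shield_until

    def take_damage(self, amount: int, now: float) -> None:
        if self.shield_active(now):  # collisions and traps are both absorbed
            return
        self.hp -= amount

p = Player()
p.activate_shield(now=0.0, duration=5.0)
p.take_damage(30, now=2.0)   # inside the shield window: absorbed
p.take_damage(30, now=6.0)   # shield expired: damage applies
```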
760,197 | 26,633,119,446 | IssuesEvent | 2023-01-24 19:28:35 | pdx-blurp/blurp-frontend | https://api.github.com/repos/pdx-blurp/blurp-frontend | closed | Map page: create a left-hand sidebar for Map Tools | new feature medium priority | * Left-hand sidebar by default is open/expanded
* Fills height of screen on left side
* When collapsed, it will show a single column of icons
* To collapse and expand it will have an arrow icon [ < ] or [ > ] at the top of the sidebar | 1.0 | Map page: create a left-hand sidebar for Map Tools - * Left-hand sidebar by default is open/expanded
* Fills height of screen on left side
* When collapsed, it will show a single column of icons
* To collapse and expand it will have an arrow icon [ < ] or [ > ] at the top of the sidebar | priority | map page create a left hand sidebar for map tools left hand sidebar by default is open expanded fills height of screen on left side when collapsed it will show a single column of icons to collapse and expand it will have an arrow icon or at the top of the sidebar | 1 |
775,120 | 27,219,626,831 | IssuesEvent | 2023-02-21 03:19:32 | Avaiga/taipy-core | https://api.github.com/repos/Avaiga/taipy-core | opened | Implementation data migration API on production and experiment mode | Core: Versioning 🟨 Priority: Medium ✨New feature | **What would that feature address**
When there are multiple versions of a Taipy core application in a production environment and a major conflict occurs, it is not possible to run (or get) old entities.
***Description of the ideal solution***
1. On experiment mode
When the user wants to update an existing experiment version:
- Without the `--force` option:
→ Prompt the ConfigComparator output
→ Prompt a suggestion: “Add a new experiment version with the migration function or Use the `--force` option to override the Config of the current version”.
- No support for the `--force` option:
→ If the user wants to override an experiment, then he/she should delete the version and create a new one with the same name (which is a bit restrictive). This should be clarified on the doc.
There is no support for migrating experiment entities at the moment.
Because the purpose of an experiment version is to store a stable version of the application, and if the Config is forced to update, then it is not really stable.
2. On production mode
- When the user wants to update an existing production version:
- Without the `--force` option:
→ Prompt the ConfigComparator output
→ Prompt there are migration functions that need to be updated as well.
→ Prompt suggestions: “Add a new production version with the migration function or Use the `--force` option to override the current version and don’t forget to update the migration functions to this version as well”.
- With the `--force` option:
→ Only force update the Config on the current version, not the older ones.
→ Prompt out a warning that says that you need to update the migration function as well.
→ Migrate entities of the older versions on the fly (when there are conflicts) using the migration method provided by the users.
- When the user pushes a new production version:
- Ignore the `--force` option.
- We only need to apply the ConfigComparator with the latest production version before this.
We check if there is any migration function or not, if there is not then print out a Warning.
***Caveats***
With `job_executionmode="standalone"`, we need to make sure that the migration function is applied to entities on subprocess as well. | 1.0 | Implementation data migration API on production and experiment mode - **What would that feature address**
When there are multiple versions of a Taipy core application in a production environment and a major conflict occurs, it is not possible to run (or get) old entities.
***Description of the ideal solution***
1. On experiment mode
When the user wants to update an existing experiment version:
- Without the `--force` option:
→ Prompt the ConfigComparator output
→ Prompt a suggestion: “Add a new experiment version with the migration function or Use the `--force` option to override the Config of the current version”.
- No support for the `--force` option:
→ If the user wants to override an experiment, then he/she should delete the version and create a new one with the same name (which is a bit restrictive). This should be clarified on the doc.
There is no support for migrating experiment entities at the moment.
Because the purpose of an experiment version is to store a stable version of the application, and if the Config is forced to update, then it is not really stable.
2. On production mode
- When the user wants to update an existing production version:
- Without the `--force` option:
→ Prompt the ConfigComparator output
→ Prompt there are migration functions that need to be updated as well.
→ Prompt suggestions: “Add a new production version with the migration function or Use the `--force` option to override the current version and don’t forget to update the migration functions to this version as well”.
- With the `--force` option:
→ Only force update the Config on the current version, not the older ones.
→ Prompt out a warning that says that you need to update the migration function as well.
→ Migrate entities of the older versions on the fly (when there are conflicts) using the migration method provided by the users.
- When the user pushes a new production version:
- Ignore the `--force` option.
- We only need to apply the ConfigComparator with the latest production version before this.
We check if there is any migration function or not, if there is not then print out a Warning.
***Caveats***
With `job_executionmode="standalone"`, we need to make sure that the migration function is applied to entities on subprocess as well. | priority | implementation data migration api on production and experiment mode what would that feature address when there is multiple version of a taipy core application on production environment when there is major conflict it is not possible to run or get old entities description of the ideal solution on experiment mode when the user want to update an existing experiment version without the force option rarr prompt the configcomparator output rarr prompt a suggestion “add a new experiment version with the migration function or use the force option to override the config of the current version” no support for the force option rarr if the user wants to override an experiment then he she should delete the version and create a new one with the same name which is a bit restrictive this should be clarified on the doc there is no support for migrating experiment entities at the moment because the purpose of an experiment version is to store a stable version of the application and if the config is forced to update then it is not really stable on production mode when the user want to update an existing production version without the force option rarr prompt the configcomparator output rarr prompt there are migration functions that need to be updated as well rarr prompt suggestions “add a new production version with the migration function or use the force option to override the current version and don’t forget to update the migration functions to this version as well” with the force option rarr only force update the config on the current version not the older ones rarr prompt out a warning that says that you need to update the migration function as well rarr migrate entities of the older on the fly when there are conflicts using the migration method provided by the users when the user pushes a new production version ignore the force option 
we only need to apply the configcomparator with the latest production version before this we check if there is any migration function or not if there is not then print out a warning caveats with job executionmode standalone we need to make sure that the migration function is applied to entities on subprocess as well | 1 |
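The on-the-fly migration flow described above can be sketched as a registry of per-version migration functions that old entities are chained through; the function names and entity shape are assumptions for illustration, not Taipy's actual API:

```python
MIGRATIONS = {}  # source version -> migration function

def register_migration(from_version: str):
    """Decorator registering a migration that upgrades one version step."""
    def decorator(fn):
        MIGRATIONS[from_version] = fn
        return fn
    return decorator

@register_migration("1.0")
def _add_retry_field(entity: dict) -> dict:
    entity.setdefault("retries", 0)  # illustrative field introduced in 1.1
    entity["version"] = "1.1"
    return entity

def migrate(entity: dict, target_version: str) -> dict:
    """Apply chained migrations until the entity reaches the target version."""
    while entity["version"] != target_version:
        fn = MIGRATIONS.get(entity["version"])
        if fn is None:
            raise RuntimeError(f"no migration from {entity['version']}")
        entity = fn(entity)
    return entity

old = {"version": "1.0", "name": "scenario-1"}
new = migrate(old, "1.1")
```

This also shows why the warnings above matter: if the user force-updates the Config without updating `MIGRATIONS`, old entities hit the "no migration" error at load time.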
554,410 | 16,420,052,796 | IssuesEvent | 2021-05-19 11:28:52 | theseion/Fuel | https://api.github.com/repos/theseion/Fuel | closed | Pharo 7: error in Set>>#addIfNotPresent:ifPresentDo: | Priority-Medium bug | There is a DNU if you use this method, which sends `#asCollectionElement` instead of `#asSetElement`.
I'm loading Fuel from Voyage, which loads ImageWorker with:
~~~
imageWorker: spec
spec baseline: 'ImageWorker' with: [
spec
repository: 'github://pharo-contributions/ImageWorker/source' ]
~~~
which loads Fuel-Metalevel with:
~~~
fuelMetalevel: spec
spec baseline: 'Fuel' with: [
spec
repository: 'github://theseion/Fuel/repository';
loads: #( 'Fuel-Metalevel' ) ]
~~~ | 1.0 | Pharo 7: error in Set>>#addIfNotPresent:ifPresentDo: - There is a DNU if you use this method, which sends `#asCollectionElement` instead of `#asSetElement`.
I'm loading Fuel from Voyage, which loads ImageWorker with:
~~~
imageWorker: spec
spec baseline: 'ImageWorker' with: [
spec
repository: 'github://pharo-contributions/ImageWorker/source' ]
~~~
which loads Fuel-Metalevel with:
~~~
fuelMetalevel: spec
spec baseline: 'Fuel' with: [
spec
repository: 'github://theseion/Fuel/repository';
loads: #( 'Fuel-Metalevel' ) ]
~~~ | priority | pharo error in set addifnotpresent ifpresentdo there is an dnu if you use this method which sends ascollectionelement instead of assetelement i m loading fuel from voyage which loads imageworker with imageworker spec spec baseline imageworker with spec repository github pharo contributions imageworker source which loads fuel metalevel with fuelmetalevel spec spec baseline fuel with spec repository github theseion fuel repository loads fuel metalevel | 1 |
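For reference, the intended `Set>>#addIfNotPresent:ifPresentDo:` contract translated into Python terms (the Pharo bug itself is just the wrong selector — sending `#asCollectionElement`, which Set elements do not understand, instead of `#asSetElement` — hence the doesNotUnderstand error):

```python
def add_if_not_present(s: set, element, if_present):
    """Add `element` to `s`; call `if_present()` when it is already there."""
    if element in s:
        if_present()
        return s
    s.add(element)
    return s

hits = []
s = {1, 2}
add_if_not_present(s, 3, lambda: hits.append("present"))  # absent: added
add_if_not_present(s, 3, lambda: hits.append("present"))  # present: block runs
```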
518,704 | 15,032,986,372 | IssuesEvent | 2021-02-02 10:53:03 | ivpn/ios-app | https://api.github.com/repos/ivpn/ios-app | opened | Network Protection - App gets stuck on connecting/disconnecting state | IKEv2 Network Protection OpenVPN priority: medium type: bug | ### Description
While testing network protection on version 2.1.0 (27), it was observed that the app was getting stuck in a connecting/disconnecting state when the default & current network trust status was changed quickly. The issue happens not only with IKEv2, where the app endlessly throws the authentication error, but also with OpenVPN, where the app permanently tries to connect/disconnect.
**Note:**
See attached video for further details.
Please note that this issue is not observed with WireGuard.
### Actual result:
App gets stuck on connecting/disconnecting state when changing the default & current network trust status repeatedly.
### Expected result:
The app should never get stuck on connecting/disconnecting state when changing the trust status (even repeatedly).
### Environment
Device: iPhone XR
OS name and version: iOS 14.3
IVPN app version: Beta 2.1.0 (27)
### File

| 1.0 | Network Protection - App gets stuck on connecting/disconnecting state - ### Description
While testing network protection on version 2.1.0 (27), it was observed that the app was getting stuck in a connecting/disconnecting state when the default & current network trust status was changed quickly. The issue happens not only with IKEv2, where the app endlessly throws the authentication error, but also with OpenVPN, where the app permanently tries to connect/disconnect.
**Note:**
See attached video for further details.
Please note that this issue is not observed with WireGuard.
### Actual result:
App gets stuck on connecting/disconnecting state when changing the default & current network trust status repeatedly.
### Expected result:
The app should never get stuck on connecting/disconnecting state when changing the trust status (even repeatedly).
### Environment
Device: iPhone XR
OS name and version: iOS 14.3
IVPN app version: Beta 2.1.0 (27)
### File

| priority | network protection app gets stuck on connecting disconnecting state description while testing network protection on version it was observed that the app was getting stuck in connecting disconnecting state when changing the default current network trust status quickly the issue not only happens with where the app throws endlessly the authentication error but with openvpn as well where the app permanently tries to connect disconnect note see attached video for further details please note that this issue is not observed with wireguard actual result app gets stuck on connecting disconnecting state when changing the default current network trust status repeatedly expected result the app should never get stuck on connecting disconnecting state when changing the trust status even repeatedly environment device iphone xr os name and version ios ivpn app version beta file | 1 |
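The race in this report — rapid trust-status flips issuing overlapping connect/disconnect requests — is typically fixed by coalescing transitions so the tunnel only ever works toward the latest desired state. A minimal sketch (hypothetical names, not the IVPN app's actual Swift code; shown in JavaScript for illustration):

```javascript
// Hypothetical sketch: coalesce rapid trust-status changes so the tunnel
// only ever works toward the *latest* desired state, never a stale one.
function createTunnelGuard(startTunnel, stopTunnel) {
  let desired = 'disconnected'; // latest state requested by the trust rules
  let actual = 'disconnected';  // state the tunnel has actually reached
  let busy = false;             // a connect/disconnect is in flight

  async function settle() {
    if (busy) return;           // the in-flight transition re-checks `desired`
    busy = true;
    while (actual !== desired) {
      const target = desired;
      if (target === 'connected') await startTunnel();
      else await stopTunnel();
      actual = target;          // loop again in case `desired` changed meanwhile
    }
    busy = false;
  }

  return {
    // Called whenever the network trust status changes.
    onTrustChange(trusted) {
      desired = trusted ? 'disconnected' : 'connected';
      return settle();
    },
    state: () => actual,
  };
}
```

With this guard, intermediate requests made while a transition is in flight are absorbed; only the final desired state is acted on, so the tunnel cannot oscillate between connecting and disconnecting.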
616,636 | 19,308,395,619 | IssuesEvent | 2021-12-13 13:58:49 | BEXIS2/Core | https://api.github.com/repos/BEXIS2/Core | closed | View party: conditional attribute not shown | Priority: Medium Type: Bug bug | in profile we have

in manage party we have

256,102 | 8,126,844,293 | IssuesEvent | 2018-08-17 05:05:04 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Add visit_utils module | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal |
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1033
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Add visit_utils module
Assigned to: Cyrus Harrison
Category:
Target version: 2.5
Author: Cyrus Harrison
Start: 04/25/2012
Due date:
% Done: 0
Estimated time:
Created: 04/25/2012 12:07 pm
Updated: 04/25/2012 12:40 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Comments:
Resolved w/ r17980
36,196 | 2,796,970,497 | IssuesEvent | 2015-05-12 10:58:23 | CUL-DigitalServices/grasshopper-ui | https://api.github.com/repos/CUL-DigitalServices/grasshopper-ui | opened | Borrow series from another module-Information text on hovering mouse over borrowed series from another module doesnt show in firefox browser | Medium Priority | Borrow series from another module: the information text shown when hovering the mouse over a series borrowed from another module doesn't show in the Firefox browser.
Firefox version 37.0
OS Windows 7 Professional
I have attached a screenshot from Chrome showing the information, and one from Firefox where it doesn't show:


24,690 | 2,671,904,978 | IssuesEvent | 2015-03-24 10:38:46 | prikhi/pencil | https://api.github.com/repos/prikhi/pencil | closed | 1.3.2 is missing the toolbar shortcut for arrangement | 2–5 stars bug could not reproduce imported Priority-Medium | _From [adam.spi...@gmail.com](https://code.google.com/u/104792522496728613330/) on December 15, 2011 15:57:54_
Prior to version 1.3.2 there was a shortcut on the toolbar for moving an element's arrangement forwards and backwards. This shortcut has been removed in version 1.3.2 which makes it more difficult to make changes because now you have to constantly RIGHT CLICK on the element to change the setting.
Please add the toolbar shortcut back!!
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=387_
222,603 | 7,434,346,236 | IssuesEvent | 2018-03-26 10:41:38 | ilestis/miscellany | https://api.github.com/repos/ilestis/miscellany | closed | Keep tab id when refreshing | good first issue improvement medium priority | # Story
As a user, I want the current tab to be opened when I refresh the page
# Work
1. Adapt the tabs on views to add the tab anchor to the current page, so that if you view a character, hit the "relations" tab, and click refresh, the "relations" tab is still selected.
Might be worth writing a blade template or something for tabs, to avoid duplicating all the code each time.
# Test
* Clicking on a location, then on the "locations" tab, then refresh, the "locations" tab is automatically selected and opened.
* Same thing on other entities with tabs
* Works on mobile
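The work item above (keep the active tab across refreshes) usually comes down to mirroring the selected tab into the URL hash and restoring it on load. A hedged sketch — the selector, event wiring, and tab ids are assumptions, not the actual markup of this codebase:

```javascript
// Hypothetical sketch: keep the selected tab in the URL hash so a refresh
// (or a shared link) reopens the same tab. The pure helper keeps the
// decision logic testable outside the browser.
function resolveActiveTab(hash, tabIds, fallback) {
  const id = (hash || '').replace(/^#/, '');
  return tabIds.includes(id) ? id : fallback;
}

// Browser wiring (Bootstrap-style tab links); guarded so the helper above
// can also run under Node.
if (typeof document !== 'undefined') {
  const links = Array.from(document.querySelectorAll('a[data-toggle="tab"]'));
  const ids = links.map(a => a.getAttribute('href').slice(1));
  const active = resolveActiveTab(window.location.hash, ids, ids[0]);
  const link = links.find(a => a.getAttribute('href') === '#' + active);
  if (link) link.click(); // reactivate the remembered tab on page load
  // Record each switch in the URL without adding history entries.
  links.forEach(a => a.addEventListener('click', () => {
    history.replaceState(null, '', a.getAttribute('href'));
  }));
}
```

A shared Blade partial (as the issue suggests) would only need to emit the `data-toggle="tab"` links once; the same script then covers every entity view.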
584,454 | 17,440,707,223 | IssuesEvent | 2021-08-05 04:05:35 | pancakeswap/pancake-frontend | https://api.github.com/repos/pancakeswap/pancake-frontend | closed | [BUG] "{Amount} CAKE" & "through community auctions so far!" aren't using the correct text colors on dark mode. | Bug Medium Priority | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
<img width="1123" alt="Screenshot at Aug 05 10-17-19" src="https://user-images.githubusercontent.com/71833681/128276074-948ab005-314b-46b1-b609-f8b4578894ed.png">
### Expected Behavior
Text should be using the correct variant so it's visible while on dark mode.
### Steps To Reproduce
1. https://pancakeswap.finance/farms/auction
2. Turn on dark mode
### Environment
```markdown
Not relevant to this bug.
```
### Anything else?
_No response_
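The usual cause of this class of bug is a hard-coded color that ignores the active theme. A sketch of resolving text colors from theme tokens — the token names and hex values here are made up for illustration, not PancakeSwap's actual palette:

```javascript
// Hypothetical theme tokens — names and hex values are illustrative only.
const themes = {
  light: { text: '#452A7A', textSubtle: '#8F80BA', background: '#FFFFFF' },
  dark:  { text: '#EAE2FC', textSubtle: '#A28BD4', background: '#27262C' },
};

// Resolve a text color from the active theme instead of hard-coding it,
// so every text variant stays readable when the user toggles dark mode.
function textColor(isDark, variant = 'text') {
  const theme = themes[isDark ? 'dark' : 'light'];
  if (!(variant in theme)) throw new Error(`unknown color token: ${variant}`);
  return theme[variant];
}
```

In a themed component library the same idea is expressed by always rendering text through a `Text` component whose color prop names a token, never a literal hex value.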
25,763 | 2,683,972,133 | IssuesEvent | 2015-03-28 14:41:10 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Visual glitches when resizing the window with the mouse | 1 star bug imported Priority-Medium wontfix | _From [kerberos464@gmail.com](https://code.google.com/u/kerberos464@gmail.com/) on November 25, 2010 00:54:45_
OK, I installed ConEmu, launched it, and changed only the following in the settings:
font size 18, non-blinking cursor, window size 150x52 (22" monitor), auto-minimize to tray, Lazy tab switch = off; I don't think I changed anything else.
Now I grab the window by the bottom-right corner with the mouse, shrink it and then enlarge it, press CtrlO (I haven't yet installed the CtrlO macro that ships with ConEmu), and I see the following picture: http://i12.fastpic.ru/big/2010/1125/c8/baac308ef1880be4ff218bffe8257bc8.jpeg If I keep dragging the window by the corner for a while, the "disease" progresses :) http://i12.fastpic.ru/big/2010/1125/09/0c094e7b046f096c225eeb1e73db3d09.jpeg Is this because the CtrlO macro isn't installed, or should I simply not do that?
P.S. Yes, I'm a weirdo =)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=323_
106,817 | 4,286,297,138 | IssuesEvent | 2016-07-16 01:47:00 | munki/munki | https://api.github.com/repos/munki/munki | closed | com.googlecode.munki.munkiimport compatibility with Profile Manager | enhancement imported Priority-Medium | _From [boneyjel...@gmail.com](https://code.google.com/u/113163839790660303759/) on July 07, 2014 11:46:43_
I would like to have the capability to better manage munkiimport and manifestutil from Profile Manager or Workgroup Manager. Right now, Munki only checks ~/Library/Preferences/com.googlecode.munki.munkiimport.plist for configuration preferences rather than looking globally for the preference information.
Alternatively, please allow me to specify the repo path from the manifestutil command line. Right now, only munkiimport and makecatalogs allow the repo path to be set from the command line.
_Original issue: http://code.google.com/p/munki/issues/detail?id=347_
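The requested behavior — check the user-level plist first, then fall back to a global/managed domain — is a layered preference lookup. A sketch of the lookup order (paths and keys are illustrative; the real implementation on macOS would go through the system preference APIs such as CFPreferences rather than reading plists by path):

```javascript
// Hypothetical layered preference lookup: earlier layers win. The report
// asks for exactly this — try the user-level domain first, then fall back
// to host-level/managed locations instead of stopping at the user plist.
function lookupPref(key, layers) {
  for (const layer of layers) {
    if (layer && Object.prototype.hasOwnProperty.call(layer, key)) {
      return layer[key];
    }
  }
  return undefined; // unset in every layer
}
```

Here `layers` would be the parsed contents of `~/Library/Preferences/com.googlecode.munki.munkiimport.plist` followed by the host-level and managed equivalents, so a profile pushed by Profile Manager can supply defaults that an individual user may still override.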
526,953 | 15,305,452,496 | IssuesEvent | 2021-02-24 18:08:14 | itslupus/gamersnet | https://api.github.com/repos/itslupus/gamersnet | closed | Backend authentication for account sign in | backend dev task medium priority | **Description**:
Create backend authentication endpoint
43,680 | 2,891,182,026 | IssuesEvent | 2015-06-15 01:27:00 | aseprite/aseprite | https://api.github.com/repos/aseprite/aseprite | closed | Layer Opacity | enhancement imported medium priority ui | _From [allegrot...@gmail.com](https://code.google.com/u/112244484696240855157/) on April 22, 2013 22:38:31_
What do you need to do? Make a layer semi-transparent. How would you like to do it? I block animation in with rough shapes, and then want to clean it up. I'd like it if I can easily draw over layers by drawing on a new layer, and having the other layer go semi-transparent so that if I am using the same color I can easily see what I'm painting. Would be nice to have transparency sliders for layers.
_Original issue: http://code.google.com/p/aseprite/issues/detail?id=225_
389,096 | 11,497,342,131 | IssuesEvent | 2020-02-12 09:51:54 | AY1920S2-CS2103T-W13-4/main | https://api.github.com/repos/AY1920S2-CS2103T-W13-4/main | opened | viewLocation | priority.Medium type.Story | As a user, I can check the venue of the class, so that I can plan my traveling route during module planning.
623,909 | 19,683,135,734 | IssuesEvent | 2022-01-11 18:52:07 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | Logs directory not found | bug core priority-medium | **Describe the bug**
An exception is logged when navigating the web IDE
> What version of the XSK are you using?
latest
**To Reproduce**
Steps to reproduce the behavior:
1. Start XSK
2. Go to the workbench perspective
3. See exception in logs
**Expected behavior**
No exception is logged
**Additional context**
`../logs` seems to be the value of the `DIRIGIBLE_OPERATIONS_LOGS_ROOT_FOLDER_DEFAULT` variable: https://github.com/eclipse/dirigible/blob/master/modules/services/service-operations/src/main/java/org/eclipse/dirigible/runtime/operations/processor/LogsProcessor.java#L37
Exception:
```
2022-01-11 09:35:32.009 [TRACE] [http-nio-8080-exec-7] app.http.rs.controller - Serving request for Resource[ide.css], Method[GET], Content-Type[[]], Accept[["text/css","*/*"]] finished
2022-01-11 09:35:32.134 [TRACE] [http-nio-8080-exec-3] app.http.rs.controller - Serving request for Resource[bootstrap.min.css], Method[GET], Content-Type[[]], Accept[["text/css","*/*"]] finished
2022-01-11 09:35:32.164 [TRACE] [http-nio-8080-exec-8] app.http.rs.controller - Serving request for Resource[bootstrap.min.css], Method[GET], Content-Type[[]], Accept[["text/css","*/*"]] finished
2022-01-11 09:35:32.741 [TRACE] [http-nio-8080-exec-4] app.http.rs.controller - Serving request for Resource[extensions], Method[GET], Content-Type[[]], Accept[["*/*"]] finished
2022-01-11 09:35:33.002 [ERROR] [http-nio-8080-exec-7] o.e.d.r.c.s.GeneralExceptionHandler - ../logs
java.nio.file.NoSuchFileException: ../logs
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) ~[na:na]
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[na:na]
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[na:na]
at java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432) ~[na:na]
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:472) ~[na:na]
at org.eclipse.dirigible.runtime.operations.processor.LogsProcessor.list(LogsProcessor.java:46) ~[dirigible-service-operations-6.1.14.jar:na]
at org.eclipse.dirigible.runtime.operations.service.LogsService.listLogs(LogsService.java:86) ~[dirigible-service-operations-6.1.14.jar:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[cxf-core-3.5.0.jar:3.5.0]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[cxf-core-3.5.0.jar:3.5.0]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) ~[cxf-rt-frontend-jaxrs-3.5.0.jar:3.5.0]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) ~[cxf-rt-frontend-jaxrs-3.5.0.jar:3.5.0]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) ~[cxf-core-3.5.0.jar:3.5.0]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) ~[cxf-core-3.5.0.jar:3.5.0]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307) ~[cxf-core-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) ~[cxf-core-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:222) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) ~[servlet-api.jar:na]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:279) ~[cxf-rt-transports-http-3.5.0.jar:3.5.0]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-websocket.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.dirigible.runtime.core.filter.HealthCheckFilter.doFilter(HealthCheckFilter.java:57) ~[dirigible-service-core-6.1.14.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.dirigible.runtime.core.filter.HttpContextFilter.doFilter(HttpContextFilter.java:57) ~[dirigible-service-core-6.1.14.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at com.sap.xsk.xsodata.ds.filter.XSODataForwardFilter.doFilter(XSODataForwardFilter.java:50) ~[xsk-modules-engines-xsodata-0.13.0-SNAPSHOT.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:364) ~[catalina.jar:8.5.43]
at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:170) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.jetty.servlets.DoSFilter.doFilterChain(DoSFilter.java:482) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.eclipse.jetty.servlets.DoSFilter.doFilter(DoSFilter.java:327) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.eclipse.jetty.servlets.DoSFilter.doFilter(DoSFilter.java:297) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.jetty.servlets.QoSFilter.doFilter(QoSFilter.java:203) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) ~[catalina.jar:8.5.43]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:610) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137) ~[catalina.jar:8.5.43]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) ~[catalina.jar:8.5.43]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) ~[catalina.jar:8.5.43]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) ~[catalina.jar:8.5.43]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798) ~[tomcat-coyote.jar:8.5.43]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) ~[tomcat-coyote.jar:8.5.43]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808) ~[tomcat-coyote.jar:8.5.43]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498) ~[tomcat-coyote.jar:8.5.43]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-coyote.jar:8.5.43]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-util.jar:8.5.43]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
```
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-websocket.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.dirigible.runtime.core.filter.HealthCheckFilter.doFilter(HealthCheckFilter.java:57) ~[dirigible-service-core-6.1.14.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.dirigible.runtime.core.filter.HttpContextFilter.doFilter(HttpContextFilter.java:57) ~[dirigible-service-core-6.1.14.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at com.sap.xsk.xsodata.ds.filter.XSODataForwardFilter.doFilter(XSODataForwardFilter.java:50) ~[xsk-modules-engines-xsodata-0.13.0-SNAPSHOT.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:364) ~[catalina.jar:8.5.43]
at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:170) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.jetty.servlets.DoSFilter.doFilterChain(DoSFilter.java:482) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.eclipse.jetty.servlets.DoSFilter.doFilter(DoSFilter.java:327) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.eclipse.jetty.servlets.DoSFilter.doFilter(DoSFilter.java:297) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.eclipse.jetty.servlets.QoSFilter.doFilter(QoSFilter.java:203) ~[jetty-servlets-9.4.12.v20180830.jar:9.4.12.v20180830]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) ~[catalina.jar:8.5.43]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:610) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137) ~[catalina.jar:8.5.43]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) ~[catalina.jar:8.5.43]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660) ~[catalina.jar:8.5.43]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) ~[catalina.jar:8.5.43]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) ~[catalina.jar:8.5.43]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798) ~[tomcat-coyote.jar:8.5.43]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) ~[tomcat-coyote.jar:8.5.43]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808) ~[tomcat-coyote.jar:8.5.43]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498) ~[tomcat-coyote.jar:8.5.43]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-coyote.jar:8.5.43]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-util.jar:8.5.43]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
```
| priority | logs directory not found describe the bug an exception is logged when navigating the web ide what version of the xsk are you using latest to reproduce steps to reproduce the behavior start xsk go to the workbench perspective see exception in logs expected behavior no exception is logged additional context logs seems to be the value of dirigible operations logs root folder default variable exception app http rs controller serving request for resource method content type accept finished app http rs controller serving request for resource method content type accept finished app http rs controller serving request for resource method content type accept finished app http rs controller serving request for resource method content type accept finished o e d r c s generalexceptionhandler logs java nio file nosuchfileexception logs at java base sun nio fs unixexception translatetoioexception unixexception java at java base sun nio fs unixexception rethrowasioexception unixexception java at java base sun nio fs unixexception rethrowasioexception unixexception java at java base sun nio fs unixfilesystemprovider newdirectorystream unixfilesystemprovider java at java base java nio file files newdirectorystream files java at org eclipse dirigible runtime operations processor logsprocessor list logsprocessor java at org eclipse dirigible runtime operations service logsservice listlogs logsservice java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java at org apache cxf service invoker abstractinvoker invoke abstractinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf 
jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf interceptor serviceinvokerinterceptor run serviceinvokerinterceptor java at org apache cxf interceptor serviceinvokerinterceptor handlemessage serviceinvokerinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet doget abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org eclipse dirigible runtime core filter healthcheckfilter dofilter healthcheckfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org eclipse dirigible runtime core filter httpcontextfilter dofilter httpcontextfilter java 
at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at com sap xsk xsodata ds filter xsodataforwardfilter dofilter xsodataforwardfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters corsfilter handlenoncors corsfilter java at org apache catalina filters corsfilter dofilter corsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org eclipse jetty servlets dosfilter dofilterchain dosfilter java at org eclipse jetty servlets dosfilter dofilter dosfilter java at org eclipse jetty servlets dosfilter dofilter dosfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org eclipse jetty servlets qosfilter dofilter qosfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache 
catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java base java lang thread run thread java | 1 |
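The XSK report above boils down to `Files.newDirectoryStream` being invoked on a configured logs folder (`../logs`) that does not exist. As a hedged illustration only — the real `LogsProcessor` is Java code inside Eclipse Dirigible, and this is a Python analogue with hypothetical names — the usual defensive fix is to check for the directory before iterating:

```python
from pathlib import Path


def list_log_files(root: str) -> list:
    """Return the names of regular files under *root*, sorted.

    If the configured folder is missing (the situation behind the
    NoSuchFileException above), degrade to an empty listing instead
    of raising. Function and parameter names are hypothetical.
    """
    folder = Path(root)
    if not folder.is_dir():  # configured path absent or not a directory
        return []
    return sorted(entry.name for entry in folder.iterdir() if entry.is_file())
```

With a guard like this, calling `list_log_files("../logs")` from a working directory that has no `logs` sibling would simply return `[]` rather than surfacing an exception to the Web IDE.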
806,496 | 29,830,825,299 | IssuesEvent | 2023-06-18 08:49:03 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | Fishing spell not working properly | Status: Confirmed Priority: Medium Profession: Fishing | **Description:**
When you are fishing, after catching a fish, if you cast the spell again too quickly the fishing bobber bugs out. The major problem is that if you cast again and start fishing, it counts the previous bugged bobber as your main one.
**How to reproduce:**
Drag the fishing spell into action bar, start fishing, catch a fish and cast the spell fast again. | 1.0 | Fishing spell not working properly - **Description:**
When you are fishing after catching a fish, if you cast the spell too fast the fishing bobber bugs. The major problem is that if you cast again and start fishing it count the previous bugged bobber as your main.
**How to reproduce:**
Drag the fishing spell into action bar, start fishing, catch a fish and cast the spell fast again. | priority | fishing spell not working properly description when you are fishing after catching a fish if you cast the spell too fast the fishing bobber bugs the major problem is that if you cast again and start fishing it count the previous bugged bobber as your main how to reproduce drag the fishing spell into action bar start fishing catch a fish and cast the spell fast again | 1 |
40,759 | 2,868,940,459 | IssuesEvent | 2015-06-05 22:05:18 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Be agnostic about casing for ReadMe | enhancement Fixed Priority-Medium | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7955_
----
From Kevin Moore: Some of us have had readme.md for a long time.
_Originally opened as dart-lang/sdk#7955_
----
From Kevin Moore: Some of us have had readme.md for a long time. | priority | be agnostic about casing for readme issue by originally opened as dart lang sdk from kevin moore some of us have had readme md for a long time | 1
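The pub request above asks the tool to accept any casing of the readme filename. pub itself is written in Dart; purely as an illustrative sketch (hypothetical names, Python for brevity), a case-insensitive lookup looks like this:

```python
from pathlib import Path
from typing import Optional


def find_readme(package_dir: str) -> Optional[Path]:
    """Return the first file named 'readme' or 'readme.*', compared
    case-insensitively, or None when no readme is present."""
    for entry in sorted(Path(package_dir).iterdir()):
        if not entry.is_file():
            continue
        name = entry.name.lower()
        if name == "readme" or name.startswith("readme."):
            return entry
    return None
```

This accepts `README.md`, `readme.md`, `ReadMe.markdown`, and a bare `readme` alike, which is the behaviour the issue asks for.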
374,492 | 11,091,401,347 | IssuesEvent | 2019-12-15 12:09:15 | DigitalCampus/django-oppia | https://api.github.com/repos/DigitalCampus/django-oppia | closed | Duplicate code blocks in quiz management commands | medium priority refectoring/review | see: quiz/management/commands/check_duplicate_quizzes.py and quiz/management/commands/cleanup_quizzes.py
on SonarCloud: https://sonarcloud.io/component_measures?id=django_oppia&metric=new_duplicated_lines_density&selected=django_oppia%3Aquiz%2Fmanagement%2Fcommands%2Fcheck_duplicate_quizzes.py | 1.0 | Duplicate code blocks in quiz management commands - see: quiz/management/commands/check_duplicate_quizzes.py and quiz/management/commands/cleanup_quizzes.py
on SonarCloud: https://sonarcloud.io/component_measures?id=django_oppia&metric=new_duplicated_lines_density&selected=django_oppia%3Aquiz%2Fmanagement%2Fcommands%2Fcheck_duplicate_quizzes.py | priority | duplicate code blocks in quiz management commands see quiz management commands check duplicate quizzes py and quiz management commands cleanup quizzes py on sonarcloud | 1 |
377,817 | 11,184,823,995 | IssuesEvent | 2019-12-31 20:26:38 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Updating the UpdateService to Community Asset | Area: Platform > DotNetNuke.Web Effort: Low Priority: Medium Status: Review Type: Enhancement Type: Maintenance | The Community now has its own copy of the Update Service, which provides update notifications for DNN Platform as well as for third-party modules. This service should be updated for both new installations as well as existing installations to point to the new location. | 1.0 | Updating the UpdateService to Community Asset - The Community now has its own copy of the Update Service, which provides update notifications for DNN Platform as well as for third-party modules. This service should be updated for both new installations as well as existing installations to point to the new location. | priority | updating the updateservice to community asset the community now has its own copy of the update service which provides update notifications for dnn platform as well as for third party modules this service should be updated for both new installations as well as existing installations to point to the new location | 1 |
308,867 | 9,458,571,886 | IssuesEvent | 2019-04-17 05:53:33 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Add event triggering for missing identity operations | Complexity/Medium Priority/High Severity/Major Type/Improvement WUM | Currently we only fire "account lock" event only. Which is a generic event where it can be fired for multiple operations. We need to trigger a specific event here. Furthermore, there are multiple operations that we do not trigger events. We need to identify them and trigger events accordingly. | 1.0 | Add event triggering for missing identity operations - Currently we only fire "account lock" event only. Which is a generic event where it can be fired for multiple operations. We need to trigger a specific event here. Furthermore, there are multiple operations that we do not trigger events. We need to identify them and trigger events accordingly. | priority | add event triggering for missing identity operations currently we only fire account lock event only which is a generic event where it can be fired for multiple operations we need to trigger a specific event here furthermore there are multiple operations that we do not trigger events we need to identify them and trigger events accordingly | 1 |
736,101 | 25,458,404,407 | IssuesEvent | 2022-11-24 16:05:24 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Renaming a colocated table renames the primary table in the superblock | kind/bug area/docdb priority/medium | Jira Link: [DB-4316](https://yugabyte.atlassian.net/browse/DB-4316)
### Description
On renaming a colocated table, the parent table in the superblock is renamed instead of the intended table. It happens because the default table id (primary table id) is being used to set the new name.
```
metadata_->SetSchema(*operation->schema(), operation->index_map(), deleted_cols,
operation->schema_version(), current_table_info->table_id); <- passing the table id (correct)
if (operation->has_new_table_name()) {
metadata_->SetTableName(
current_table_info->namespace_name, operation->new_table_name().ToBuffer()); <- NOT passing the table id (incorrect)
if (table_metrics_entity_) {
table_metrics_entity_->SetAttribute("table_name", operation->new_table_name().ToBuffer());
table_metrics_entity_->SetAttribute("namespace_name", current_table_info->namespace_name);
}
if (tablet_metrics_entity_) {
tablet_metrics_entity_->SetAttribute("table_name", operation->new_table_name().ToBuffer());
tablet_metrics_entity_->SetAttribute("namespace_name", current_table_info->namespace_name);
}
}
```
Tables in superblock before the alter:
```
table_id: "000033e8000030008000000000004000.tablegroup.parent.uuid"
table_name: "000033e8000030008000000000004000.tablegroup.parent.tablename"
table_id: "000033e8000030008000000000004001"
table_name: "t1"
```
After running alter table t1 rename to new_name;
```
table_id: "000033e8000030008000000000004000.tablegroup.parent.uuid"
table_name: "new_name"
table_id: "000033e8000030008000000000004001"
table_name: "t1"
``` | 1.0 | [DocDB] Renaming a colocated table renames the primary table in the superblock - Jira Link: [DB-4316](https://yugabyte.atlassian.net/browse/DB-4316)
### Description
On renaming a colocated table, the parent table in the superblock is renamed instead of the intended table. It happens because the default table id (primary table id) is being used to set the new name.
```
metadata_->SetSchema(*operation->schema(), operation->index_map(), deleted_cols,
operation->schema_version(), current_table_info->table_id); <- passing the table id (correct)
if (operation->has_new_table_name()) {
metadata_->SetTableName(
current_table_info->namespace_name, operation->new_table_name().ToBuffer()); <- NOT passing the table id (incorrect)
if (table_metrics_entity_) {
table_metrics_entity_->SetAttribute("table_name", operation->new_table_name().ToBuffer());
table_metrics_entity_->SetAttribute("namespace_name", current_table_info->namespace_name);
}
if (tablet_metrics_entity_) {
tablet_metrics_entity_->SetAttribute("table_name", operation->new_table_name().ToBuffer());
tablet_metrics_entity_->SetAttribute("namespace_name", current_table_info->namespace_name);
}
}
```
Tables in superblock before the alter:
```
table_id: "000033e8000030008000000000004000.tablegroup.parent.uuid"
table_name: "000033e8000030008000000000004000.tablegroup.parent.tablename"
table_id: "000033e8000030008000000000004001"
table_name: "t1"
```
After running alter table t1 rename to new_name;
```
table_id: "000033e8000030008000000000004000.tablegroup.parent.uuid"
table_name: "new_name"
table_id: "000033e8000030008000000000004001"
table_name: "t1"
``` | priority | renaming a colocated table renames the primary table in the superblock jira link description on renaming a colocated table the parent table in the superblock is renamed instead of the intended table it happens because the default table id primary table id is being used to set the new name metadata setschema operation schema operation index map deleted cols operation schema version current table info table id passing the table id correct if operation has new table name metadata settablename current table info namespace name operation new table name tobuffer not passing the table id incorrect if table metrics entity table metrics entity setattribute table name operation new table name tobuffer table metrics entity setattribute namespace name current table info namespace name if tablet metrics entity tablet metrics entity setattribute table name operation new table name tobuffer tablet metrics entity setattribute namespace name current table info namespace name tables in superblock before the alter table id tablegroup parent uuid table name tablegroup parent tablename table id table name after running alter table rename to new name table id tablegroup parent uuid table name new name table id table name | 1 |
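The DocDB report explains that `SetTableName` falls back to the default (primary/parent) table id, so the rename lands on the colocated parent entry in the superblock. The real code is C++ inside yugabyte-db; the toy Python model below (all names hypothetical) just demonstrates the failure mode and why passing the affected table's id explicitly fixes it:

```python
from typing import Dict, Optional


class SuperblockMetadata:
    """Toy model of tablet metadata holding several colocated tables."""

    def __init__(self, primary_table_id: str) -> None:
        self.primary_table_id = primary_table_id
        self.tables: Dict[str, str] = {}  # table_id -> table_name

    def add_table(self, table_id: str, name: str) -> None:
        self.tables[table_id] = name

    def set_table_name(self, new_name: str,
                       table_id: Optional[str] = None) -> None:
        # Callers that omit table_id rename the default (parent) entry —
        # the same symptom as the superblock dump in the report above.
        target = table_id if table_id is not None else self.primary_table_id
        self.tables[target] = new_name
```

With this model, `set_table_name("new_name")` renames the parent entry while `t1` is untouched, whereas `set_table_name("new_name", "4001")` renames the intended table — mirroring the annotated `SetSchema` call, which does pass `current_table_info->table_id`.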
560,947 | 16,607,212,371 | IssuesEvent | 2021-06-02 06:22:53 | Alluxio/alluxio | https://api.github.com/repos/Alluxio/alluxio | closed | Attachdb action report error when using Alluxio SDS | area-structured-data priority-medium type-bug | **Alluxio Version:**
2.5.0-3
**Describe the bug**
Sync errors:
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.io.IOException: Failed to get table: xxx error: [OMS Internal Error]xxx partition p1 has more than one values. Currently do not support this kind of partition
Table test failed to sync: java.io.IOException: Failed to mount table location. tableName: test hiveUfsLocation: hdfs://xxx/user/xxx/warehouse/test.db/xxx AlluxioLocation: /catalog/test/tables/xxx/hive error: Ufs path hdfs://xxx/user/xxx/warehouse/test.db/xxx does not exist
Table xxx failed to sync: java.io.IOException: Failed to mount table location. tableName: xxx hiveUfsLocation: hdfs://xxx/user/xxx/warehouse/test.db/xxx AlluxioLocation: /catalog/test/tables/xxx/hive error: Ufs path hdfs://xxx/user/xxx/warehouse/test.db/xxx does not exist
Table test failed to sync: java.io.IOException: Failed to get table: test error: [OMS Internal Error]test partition age has more than one values. Currently do not support this kind of partition
Table test failed to sync: java.io.IOException: Failed to get table: test error: [OMS Internal Error]test partition sub_1 has more than one values. Currently do not support this kind of partition
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.io.IOException: Failed to get table: test error: [OMS Internal Error]test partition sub_par1 has more than one values. Currently do not support this kind of partition
**To Reproduce**
> bin/alluxio table attachdb -o udb-hive.mount.option.{xxx}.alluxio.underfs.version=xxx -o udb-hive.mount.option.{hdfs://xxx}.alluxio.underfs.version=xxx -o udb-hive.mount.option.{hdfs://xxx}.alluxio.underfs.version=xxx hive thrift://xxx:xxx test
**Expected behavior**
attachdb operation successful
**Urgency**
If we do not give the "--ignore-sync-errors" option the attachdb action will fail
**Additional context**
Add any other context about the problem here.
| 1.0 | Attachdb action report error when using Alluxio SDS - **Alluxio Version:**
2.5.0-3
**Describe the bug**
Sync errors:
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.io.IOException: Failed to get table: xxx error: [OMS Internal Error]xxx partition p1 has more than one values. Currently do not support this kind of partition
Table test failed to sync: java.io.IOException: Failed to mount table location. tableName: test hiveUfsLocation: hdfs://xxx/user/xxx/warehouse/test.db/xxx AlluxioLocation: /catalog/test/tables/xxx/hive error: Ufs path hdfs://xxx/user/xxx/warehouse/test.db/xxx does not exist
Table xxx failed to sync: java.io.IOException: Failed to mount table location. tableName: xxx hiveUfsLocation: hdfs://xxx/user/xxx/warehouse/test.db/xxx AlluxioLocation: /catalog/test/tables/xxx/hive error: Ufs path hdfs://xxx/user/xxx/warehouse/test.db/xxx does not exist
Table test failed to sync: java.io.IOException: Failed to get table: test error: [OMS Internal Error]test partition age has more than one values. Currently do not support this kind of partition
Table test failed to sync: java.io.IOException: Failed to get table: test error: [OMS Internal Error]test partition sub_1 has more than one values. Currently do not support this kind of partition
Table test failed to sync: java.lang.IllegalArgumentException: Can not create a uri with a null path.
Table test failed to sync: java.io.IOException: Failed to get table: test error: [OMS Internal Error]test partition sub_par1 has more than one values. Currently do not support this kind of partition
**To Reproduce**
> bin/alluxio table attachdb -o udb-hive.mount.option.{xxx}.alluxio.underfs.version=xxx -o udb-hive.mount.option.{hdfs://xxx}.alluxio.underfs.version=xxx -o udb-hive.mount.option.{hdfs://xxx}.alluxio.underfs.version=xxx hive thrift://xxx:xxx test
**Expected behavior**
attachdb operation successful
**Urgency**
If we do not give the "--ignore-sync-errors" option the attachdb action will fail
**Additional context**
Add any other context about the problem here.
| priority | attachdb action report error when using alluxio sds alluxio version describe the bug sync errors table test failed to sync java lang illegalargumentexception can not create a uri with a null path table test failed to sync java lang illegalargumentexception can not create a uri with a null path table test failed to sync java lang illegalargumentexception can not create a uri with a null path table test failed to sync java io ioexception failed to get table xxx error xxx partition has more than one values currently do not support this kind of partition table test failed to sync java io ioexception failed to mount table location tablename test hiveufslocation hdfs xxx user xxx warehouse test db xxx alluxiolocation catalog test tables xxx hive error ufs path hdfs xxx user xxx warehouse test db xxx does not exist table xxx failed to sync java io ioexception failed to mount table location tablename xxx hiveufslocation hdfs xxx user xxx warehouse test db xxx alluxiolocation catalog test tables xxx hive error ufs path hdfs xxx user xxx warehouse test db xxx does not exist table test failed to sync java io ioexception failed to get table test error test partition age has more than one values currently do not support this kind of partition table test failed to sync java io ioexception failed to get table test error test partition sub has more than one values currently do not support this kind of partition table test failed to sync java lang illegalargumentexception can not create a uri with a null path table test failed to sync java io ioexception failed to get table test error test partition sub has more than one values currently do not support this kind of partition to reproduce bin alluxio table attachdb o udb hive mount option xxx alluxio underfs version xxx o udb hive mount option hdfs xxx alluxio underfs version xxx o udb hive mount option hdfs xxx alluxio underfs version xxx hive thrift xxx xxx test expected behavior attachdb operation succeesfull urgency 
if we do not give the ignore sync errors option the attachdb action will failed additional context add any other context about the problem here | 1 |
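The Alluxio report notes that `attachdb` aborts on the first table sync failure unless `--ignore-sync-errors` is passed. Purely as a sketch of those flag semantics — not Alluxio's actual implementation, and all names here are hypothetical — the control flow is a collect-or-abort loop:

```python
def sync_tables(tables, sync_one, ignore_sync_errors=False):
    """Sync every table, collecting per-table errors.

    Without ignore_sync_errors, the first failure aborts the whole
    attach; with it, failures are recorded and the remaining tables
    still get synced, so the attach can complete partially.
    """
    errors = {}
    for table in tables:
        try:
            sync_one(table)
        except Exception as exc:  # broad catch is fine for this sketch
            if not ignore_sync_errors:
                raise
            errors[table] = str(exc)
    return errors
```

The returned `errors` dict corresponds to the "Sync errors:" summary printed above: one entry per table that failed, while successfully synced tables are attached normally.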
292,713 | 8,966,782,262 | IssuesEvent | 2019-01-29 00:21:26 | codephil-columbia/typephil | https://api.github.com/repos/codephil-columbia/typephil | opened | Add scrolling feature for multi-line tutorial texts | Medium Priority | After you type and complete an exercise, in the review page you only see the last line and cannot see or scroll to previous lines. So user can't understand what they did right or wrong.
**Current. Note that you cannot see the first 2-3 lines. Just the end is shown. By default, we should start by showing the first few lines instead of the last line**

**Also change UI to make it more like this **

| 1.0 | Add scrolling feature for multi-line tutorial texts - After you type and complete an exercise, in the review page you only see the last line and cannot see or scroll to previous lines. So user can't understand what they did right or wrong.
**Current. Note that you cannot see the first 2-3 lines. Just the end is shown. By default, we should start by showing the first few lines instead of the last line**

**Also change the UI to make it more like this:**

| priority | add scrolling feature for multi line tutorial texts after you type and complete an exercise in the review page you only see the last line and cannot see or scroll to previous lines so user can t understand what they did right or wrong current note that you cannot see the first lines just the end is shown by default we should start by showing the first few lines instead of the last line also change ui to make it more like this | 1 |
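The behavior this issue asks for can be sketched in a few lines of TypeScript. This is a hypothetical helper, not code from the typephil repo: it assumes each rendered tutorial line carries an element id like `tutorial-line-<n>`, and uses the standard DOM `scrollIntoView` API to bring a given line into view, so the review page can start from the first lines instead of the last.

```typescript
// Hypothetical helper (not from the typephil codebase): scroll the review
// pane so a given tutorial line is visible. Assumes each rendered line has
// an element id of the form `tutorial-line-<index>`.
export function scrollToLine(lineIndex: number): void {
  const el = document.getElementById(`tutorial-line-${lineIndex}`);
  if (el) {
    // Smoothly center the requested line in the scrollable review area.
    el.scrollIntoView({ behavior: "smooth", block: "center" });
  }
}

// On entering the review page, show the first few lines instead of the last:
// scrollToLine(0);
```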
813,667 | 30,466,279,260 | IssuesEvent | 2023-07-17 10:35:11 | nestjs/nest | https://api.github.com/repos/nestjs/nest | closed | Lazy loaded module can't access global module's providers | type: bug :sob: scope: core effort1: hours priority: medium (3) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
In the documentation it says:
> Also, "lazy-loaded" modules share the same modules graph as those eagerly loaded on the application bootstrap as well as any other lazy modules registered later in your app.
But when trying to import a provider that is exported from a globally registered module (which eager modules are able to receive injected into their providers), lazy modules produce an error saying that the provider is not found.
### Minimum reproduction code
https://stackblitz.com/edit/nestjs-typescript-starter-muxzyg?file=src/app.module.ts
### Steps to reproduce
_No response_
### Expected behavior
I expected the lazy module's service to be able to inject a service from a global module.
### Package
- [ ] I don't know. Or some 3rd-party package
- [X] <code>@nestjs/common</code>
- [X] <code>@nestjs/core</code>
- [ ] <code>@nestjs/microservices</code>
- [ ] <code>@nestjs/platform-express</code>
- [ ] <code>@nestjs/platform-fastify</code>
- [ ] <code>@nestjs/platform-socket.io</code>
- [ ] <code>@nestjs/platform-ws</code>
- [ ] <code>@nestjs/testing</code>
- [ ] <code>@nestjs/websockets</code>
- [ ] Other (see below)
### Other package
_No response_
### NestJS version
_No response_
### Packages versions
```json
{
"@nestjs/common": "^8.1.1",
"@nestjs/core": "^8.1.1",
}
```
### Node.js version
_No response_
### In which operating systems have you tested?
- [X] macOS
- [ ] Windows
- [X] Linux
### Other
_No response_ | 1.0 | Lazy loaded module can't access global module's providers - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
In the documentation it says:
> Also, "lazy-loaded" modules share the same modules graph as those eagerly loaded on the application bootstrap as well as any other lazy modules registered later in your app.
But when trying to import a provider that is exported from a globally registered module (which eagerly loaded modules can have injected into their providers), lazy modules produce an error saying that the provider is not found.
### Minimum reproduction code
https://stackblitz.com/edit/nestjs-typescript-starter-muxzyg?file=src/app.module.ts
### Steps to reproduce
_No response_
### Expected behavior
I expected the lazy module's service to be able to inject a service from a global module.
### Package
- [ ] I don't know. Or some 3rd-party package
- [X] <code>@nestjs/common</code>
- [X] <code>@nestjs/core</code>
- [ ] <code>@nestjs/microservices</code>
- [ ] <code>@nestjs/platform-express</code>
- [ ] <code>@nestjs/platform-fastify</code>
- [ ] <code>@nestjs/platform-socket.io</code>
- [ ] <code>@nestjs/platform-ws</code>
- [ ] <code>@nestjs/testing</code>
- [ ] <code>@nestjs/websockets</code>
- [ ] Other (see below)
### Other package
_No response_
### NestJS version
_No response_
### Packages versions
```json
{
"@nestjs/common": "^8.1.1",
"@nestjs/core": "^8.1.1",
}
```
### Node.js version
_No response_
### In which operating systems have you tested?
- [X] macOS
- [ ] Windows
- [X] Linux
### Other
_No response_ | priority | lazy loaded module can t access global module s providers is there an existing issue for this i have searched the existing issues current behavior in the documentation it says also lazy loaded modules share the same modules graph as those eagerly loaded on the application bootstrap as well as any other lazy modules registered later in your app but when trying to import a provider that is exported from a globally registered module which eager modules are able to receive injected into their providers lazy modules produce an error saying that the provider is not found minimum reproduction code steps to reproduce no response expected behavior i expected the service of the lazy module to be able to import a service from a global module package i don t know or some party package nestjs common nestjs core nestjs microservices nestjs platform express nestjs platform fastify nestjs platform socket io nestjs platform ws nestjs testing nestjs websockets other see below other package no response nestjs version no response packages versions json nestjs common nestjs core node js version no response in which operating systems have you tested macos windows linux other no response | 1 |
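The setup this issue reports can be reproduced with a small sketch. Module and provider names below are illustrative, not taken from the issue; `@Global()`, `@Module()`, and `LazyModuleLoader` are the actual `@nestjs/common` / `@nestjs/core` APIs the report is about.

```typescript
import { Global, Injectable, Module } from "@nestjs/common";
import { LazyModuleLoader } from "@nestjs/core";

// A globally registered module exporting a provider. Eagerly loaded modules
// can inject ConfigService without importing GlobalConfigModule explicitly.
@Injectable()
export class ConfigService {
  get(key: string): string {
    return `value-for-${key}`; // placeholder lookup
  }
}

@Global()
@Module({ providers: [ConfigService], exports: [ConfigService] })
export class GlobalConfigModule {}

// A lazily loaded module whose provider depends on the global provider.
@Injectable()
export class LazyService {
  constructor(private readonly config: ConfigService) {}
}

@Module({ providers: [LazyService] })
export class LazyModule {}

// Somewhere in an eagerly loaded service:
@Injectable()
export class SomeService {
  constructor(private readonly lazyModuleLoader: LazyModuleLoader) {}

  async run() {
    const moduleRef = await this.lazyModuleLoader.load(() => LazyModule);
    // Per the issue, resolving LazyService here fails because ConfigService
    // cannot be found, even though GlobalConfigModule is marked @Global().
    return moduleRef.get(LazyService);
  }
}
```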