Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
5
112
repo_url
stringlengths
34
141
action
stringclasses
3 values
title
stringlengths
1
855
labels
stringlengths
4
721
body
stringlengths
1
261k
index
stringclasses
13 values
text_combine
stringlengths
96
261k
label
stringclasses
2 values
text
stringlengths
96
240k
binary_label
int64
0
1
164,189
6,220,846,767
IssuesEvent
2017-07-10 02:09:25
nylas-mail-lives/nylas-mail
https://api.github.com/repos/nylas-mail-lives/nylas-mail
closed
Fix CI for Linux
Priority: High Type: Maintenance
Currently the Linux build is failing due to issue #8. Once that issue is resolved, the Linux "build", actually the deploy, will fail due to missing AWS keys. We need to figure out where we want to store our builds and how we want to push updates to end users. [Edit] It appears that we will disable the "upload" functionality.
1.0
Fix CI for Linux - Currently the Linux build is failing due to issue #8. Once that issue is resolved, the Linux "build", actually the deploy, will fail due to missing AWS keys. We need to figure out where we want to store our builds and how we want to push updates to end users. [Edit] It appears that we will disable the "upload" functionality.
priority
fix ci for linux currently the linux build is failing due to issue once that issue is resolved the linux build actually the deploy will fail due to missing aws keys we need to figure out where we want to store our builds and how we want to push updates to end users it appears that we will disable the upload functionality
1
233,272
7,695,891,576
IssuesEvent
2018-05-18 13:47:00
marklogic-community/data-explorer
https://api.github.com/repos/marklogic-community/data-explorer
closed
FERR-49 - Automatic version update
Component - UI JIRA Migration Priority - High Status - Complete Type - Task
**Original Reporter:** @markschiffner **Created:** 25/Aug/17 8:05 AM # Description Need automatic revision number update based on checkin. At minimum, we should have a file with the revision that we update and tag the baseline. Currently UI says v0.1.0.
1.0
FERR-49 - Automatic version update - **Original Reporter:** @markschiffner **Created:** 25/Aug/17 8:05 AM # Description Need automatic revision number update based on checkin. At minimum, we should have a file with the revision that we update and tag the baseline. Currently UI says v0.1.0.
priority
ferr automatic version update original reporter markschiffner created aug am description need automatic revision number update based on checkin at minimum we should have a file with the revision that we update and tag the baseline currently ui says
1
793,045
27,981,551,903
IssuesEvent
2023-03-26 07:49:31
devvsakib/power-the-web
https://api.github.com/repos/devvsakib/power-the-web
closed
[BUG]: footer overlap in project page
bug [priority: high]
**Describe the bug** Footer is overlapping on the project page. ![image](https://user-images.githubusercontent.com/88339569/226095170-adcda7cb-d426-48e6-849f-dab8330a135c.png)s. **To Reproduce** Steps to reproduce the behavior: Check height, overflow **Expected behavior** Place in the bottom **Desktop (please complete the following information):** - OS: Windows - Browser chrome, firefox - Version 22 **Additional context** Add any other context about the problem here. ## Checklist - [x] I have read and followed the project's contributing guidelines - [x] I have checked that this issue is not a duplicate of an existing issue - [x] I have tested the bug in the latest version of the project - [x] I have included all relevant information and examples in this issue description - [x] I am available to collaborate on this issue and address any feedback from maintainers or other contributors <!-- Do not delete this --> **IMPORTANT** If it's your first Contribution to this project, please add your details to the Contributors file File Path: `src/json/Contributors.json` and you have to **MUST** read the contributing guideline before you start. [Read this Guideline Before you start](https://github.com/devvsakib/power-the-web/blob/main/CONTRIBUTING.md) > If you do not add your details or not assigned to issue, your PR will be Close
1.0
[BUG]: footer overlap in project page - **Describe the bug** Footer is overlapping on the project page. ![image](https://user-images.githubusercontent.com/88339569/226095170-adcda7cb-d426-48e6-849f-dab8330a135c.png)s. **To Reproduce** Steps to reproduce the behavior: Check height, overflow **Expected behavior** Place in the bottom **Desktop (please complete the following information):** - OS: Windows - Browser chrome, firefox - Version 22 **Additional context** Add any other context about the problem here. ## Checklist - [x] I have read and followed the project's contributing guidelines - [x] I have checked that this issue is not a duplicate of an existing issue - [x] I have tested the bug in the latest version of the project - [x] I have included all relevant information and examples in this issue description - [x] I am available to collaborate on this issue and address any feedback from maintainers or other contributors <!-- Do not delete this --> **IMPORTANT** If it's your first Contribution to this project, please add your details to the Contributors file File Path: `src/json/Contributors.json` and you have to **MUST** read the contributing guideline before you start. [Read this Guideline Before you start](https://github.com/devvsakib/power-the-web/blob/main/CONTRIBUTING.md) > If you do not add your details or not assigned to issue, your PR will be Close
priority
footer overlap in project page describe the bug footer is overlapping on the project page to reproduce steps to reproduce the behavior check height overflow expected behavior place in the bottom desktop please complete the following information os windows browser chrome firefox version additional context add any other context about the problem here checklist i have read and followed the project s contributing guidelines i have checked that this issue is not a duplicate of an existing issue i have tested the bug in the latest version of the project i have included all relevant information and examples in this issue description i am available to collaborate on this issue and address any feedback from maintainers or other contributors important if it s your first contribution to this project please add your details to the contributors file file path src json contributors json and you have to must read the contributing guideline before you start if you do not add your details or not assigned to issue your pr will be close
1
781,554
27,441,742,791
IssuesEvent
2023-03-02 11:30:46
RobotLocomotion/drake
https://api.github.com/repos/RobotLocomotion/drake
reopened
Tutorial rendering_multibody_plant broken on Deepnote
type: bug priority: high component: tutorials
Using the updated versions of #18000, the [rendering_multibody_plant.ipynb](https://deepnote.com/workspace/Drake-0b3b2c53-a7ad-441b-80f8-bf8350752305/project/Tutorials-2b4fc509-aef2-417d-a40d-6071dfed9199/%2Frendering_multibody_plant.ipynb) on Deepnote crashes partway through.
1.0
Tutorial rendering_multibody_plant broken on Deepnote - Using the updated versions of #18000, the [rendering_multibody_plant.ipynb](https://deepnote.com/workspace/Drake-0b3b2c53-a7ad-441b-80f8-bf8350752305/project/Tutorials-2b4fc509-aef2-417d-a40d-6071dfed9199/%2Frendering_multibody_plant.ipynb) on Deepnote crashes partway through.
priority
tutorial rendering multibody plant broken on deepnote using the updated versions of the on deepnote crashes partway through
1
501,730
14,532,412,917
IssuesEvent
2020-12-14 22:23:45
localstack/localstack
https://api.github.com/repos/localstack/localstack
closed
CloudFormation Ref of "AWS::ApiGatewayV2::Api" attempts to return "PhysicalResourceId" which is null (instead of "ApiId")
cannot-reproduce priority-high should-be-fixed
<!-- Love localstack? Please consider supporting our collective: :point_right: https://opencollective.com/localstack/donate --> # Type of request: This is a ... [ X ] bug report [ ] feature request # Detailed description I have an AWS::ApiGatewayV2::Api resource in my CloudFormation template. Using "Ref" on this resource, should return the ApiId (for use across other resources in the stack). Instead, the following error is observed: ```localstack_main | 2020-11-17T06:34:32:WARNING:localstack.utils.cloudformation.template_deployer: Unable to extract reference attribute PhysicalResourceId from resource: {'ApiEndpoint': 'ws://localhost:4514', 'ApiId': '48a6e425', 'Description': 'Serverless Websockets', 'Name': 'local-localstack-websockets-websockets', 'ProtocolType': 'WEBSOCKET', 'RouteSelectionExpression': '$request.body.action'}``` Note that this was working fine a few days ago. Related to this? https://github.com/localstack/localstack/pull/3244 ## Expected behavior Ref on AWS::ApiGatewayV2::Api should return the ApiId. ## Actual behavior An exception is thrown, as it attempts to get PhysicalResourceId, which either does not exist or simply isn't populated. # Steps to reproduce Please refer to the steps to reproduce contained within this comment / parent issue, as it is able to replicate the issue. https://github.com/localstack/localstack/issues/3210#issuecomment-723768889 ┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-329) by [Unito](https://www.unito.io/learn-more)
1.0
CloudFormation Ref of "AWS::ApiGatewayV2::Api" attempts to return "PhysicalResourceId" which is null (instead of "ApiId") - <!-- Love localstack? Please consider supporting our collective: :point_right: https://opencollective.com/localstack/donate --> # Type of request: This is a ... [ X ] bug report [ ] feature request # Detailed description I have an AWS::ApiGatewayV2::Api resource in my CloudFormation template. Using "Ref" on this resource, should return the ApiId (for use across other resources in the stack). Instead, the following error is observed: ```localstack_main | 2020-11-17T06:34:32:WARNING:localstack.utils.cloudformation.template_deployer: Unable to extract reference attribute PhysicalResourceId from resource: {'ApiEndpoint': 'ws://localhost:4514', 'ApiId': '48a6e425', 'Description': 'Serverless Websockets', 'Name': 'local-localstack-websockets-websockets', 'ProtocolType': 'WEBSOCKET', 'RouteSelectionExpression': '$request.body.action'}``` Note that this was working fine a few days ago. Related to this? https://github.com/localstack/localstack/pull/3244 ## Expected behavior Ref on AWS::ApiGatewayV2::Api should return the ApiId. ## Actual behavior An exception is thrown, as it attempts to get PhysicalResourceId, which either does not exist or simply isn't populated. # Steps to reproduce Please refer to the steps to reproduce contained within this comment / parent issue, as it is able to replicate the issue. https://github.com/localstack/localstack/issues/3210#issuecomment-723768889 ┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-329) by [Unito](https://www.unito.io/learn-more)
priority
cloudformation ref of aws api attempts to return physicalresourceid which is null instead of apiid love localstack please consider supporting our collective point right type of request this is a bug report feature request detailed description i have an aws api resource in my cloudformation template using ref on this resource should return the apiid for use across other resources in the stack instead the following error is observed localstack main warning localstack utils cloudformation template deployer unable to extract reference attribute physicalresourceid from resource apiendpoint ws localhost apiid description serverless websockets name local localstack websockets websockets protocoltype websocket routeselectionexpression request body action note that this was working fine a few days ago related to this expected behavior ref on aws api should return the apiid actual behavior an exception is thrown as it attempts to get physicalresourceid which either does not exist or simply isn t populated steps to reproduce please refer to the steps to reproduce contained within this comment parent issue as it is able to replicate the issue ┆issue is synchronized with this by
1
656,069
21,718,196,698
IssuesEvent
2022-05-10 20:14:24
CCSI-Toolset/FOQUS
https://api.github.com/repos/CCSI-Toolset/FOQUS
closed
Revisit PyQt version(s) requirements
Priority:High
## Motivation - Currently, we're using `pyqt==5.13` in `setup.py` - However, this has shortcomings - AFAIK 5.13 is not a "super-stable"/LTS (long-term support) version for (Py)Qt 5 such as 5.12 and 5.15 - To be able to have a Conda package for FOQUS, it has to be compatible with the available versions ## Known constraints - `pytest-qt` only works with Qt 5.11+ - AFAIK, PyQt 5.12.3 is the latest version available to install with Conda (through the `conda-forge` channel) - PyQt 5.13 is not available for Windows and Python 3.9, leading to installation errors such as `ERROR: Could not find a version that satisfies the requirement PyQt5==5.13 (from ccsi-foqus) (from versions: 5.12.3, 5.14.0, 5.14.1, 5.14.2, 5.15.0, 5.15.1, 5.15.2, 5.15.3, 5.15.4, 5.15.5, 5.15.6)`
1.0
Revisit PyQt version(s) requirements - ## Motivation - Currently, we're using `pyqt==5.13` in `setup.py` - However, this has shortcomings - AFAIK 5.13 is not a "super-stable"/LTS (long-term support) version for (Py)Qt 5 such as 5.12 and 5.15 - To be able to have a Conda package for FOQUS, it has to be compatible with the available versions ## Known constraints - `pytest-qt` only works with Qt 5.11+ - AFAIK, PyQt 5.12.3 is the latest version available to install with Conda (through the `conda-forge` channel) - PyQt 5.13 is not available for Windows and Python 3.9, leading to installation errors such as `ERROR: Could not find a version that satisfies the requirement PyQt5==5.13 (from ccsi-foqus) (from versions: 5.12.3, 5.14.0, 5.14.1, 5.14.2, 5.15.0, 5.15.1, 5.15.2, 5.15.3, 5.15.4, 5.15.5, 5.15.6)`
priority
revisit pyqt version s requirements motivation currently we re using pyqt in setup py however this has shortcomings afaik is not a super stable lts long term support version for py qt such as and to be able to have a conda package for foqus it has to be compatible with the available versions known constraints pytest qt only works with qt afaik pyqt is the latest version available to install with conda through the conda forge channel pyqt is not available for windows and python leading to installation errors such as error could not find a version that satisfies the requirement from ccsi foqus from versions
1
481,788
13,891,889,024
IssuesEvent
2020-10-19 11:22:33
sunpy/sunpy
https://api.github.com/repos/sunpy/sunpy
closed
Maps using the CD matrix are not correctly modified by resample
Bug(?) Close? Effort High Package Intermediate Priority Medium map
resample only changes the CDELT flags not CD if present.
1.0
Maps using the CD matrix are not correctly modified by resample - resample only changes the CDELT flags not CD if present.
priority
maps using the cd matrix are not correctly modified by resample resample only changes the cdelt flags not cd if present
1
79,224
3,521,991,365
IssuesEvent
2016-01-13 06:40:50
paulkass/BirdBoxBuilder
https://api.github.com/repos/paulkass/BirdBoxBuilder
closed
Add a plank with a hole geometry
Important new feature Priority: High
Add a plank with a hole geometry for the front edge of the bird box.
1.0
Add a plank with a hole geometry - Add a plank with a hole geometry for the front edge of the bird box.
priority
add a plank with a hole geometry add a plank with a hole geometry for the front edge of the bird box
1
87,589
3,755,991,559
IssuesEvent
2016-03-13 01:36:32
devspace/devspace
https://api.github.com/repos/devspace/devspace
closed
Optimize mobile experience
bug high priority
Added a few columns but couldn't see them on iPhone or iPad since the app wouldn't scroll horizontally
1.0
Optimize mobile experience - Added a few columns but couldn't see them on iPhone or iPad since the app wouldn't scroll horizontally
priority
optimize mobile experience added a few columns but couldn t see them on iphone or ipad since the app wouldn t scroll horizontally
1
252,702
8,039,294,016
IssuesEvent
2018-07-30 17:55:36
systers/communities
https://api.github.com/repos/systers/communities
closed
Set up the repository with basic angular files
Category: Coding Difficulty: MEDIUM Priority: HIGH Program: GSoC Type: Enhancement
## Description As a user, I need set up the repository, so that I can restart the project with angular framework ## Acceptance Criteria - Basic working Angular App - Basic files and modules established ### Update [Required] - README - Create multiple new files ## Definition of Done - [ ] All of the required items are completed. - [ ] Approval by 1 mentor. ## Estimation 1 hour Can I be assigned this issue @Tharangi @divyanshu-rawat @Janiceilene @MeepyMay ?
1.0
Set up the repository with basic angular files - ## Description As a user, I need set up the repository, so that I can restart the project with angular framework ## Acceptance Criteria - Basic working Angular App - Basic files and modules established ### Update [Required] - README - Create multiple new files ## Definition of Done - [ ] All of the required items are completed. - [ ] Approval by 1 mentor. ## Estimation 1 hour Can I be assigned this issue @Tharangi @divyanshu-rawat @Janiceilene @MeepyMay ?
priority
set up the repository with basic angular files description as a user i need set up the repository so that i can restart the project with angular framework acceptance criteria basic working angular app basic files and modules established update readme create multiple new files definition of done all of the required items are completed approval by mentor estimation hour can i be assigned this issue tharangi divyanshu rawat janiceilene meepymay
1
342,376
10,315,971,380
IssuesEvent
2019-08-30 08:54:18
conan-io/conan
https://api.github.com/repos/conan-io/conan
closed
Feature shallow clone for Git brings error (Version 1.18.1)
complex: low component: scm priority: high stage: queue type: bug
To help us debug your issue please explain: - [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md). - [x] I've specified the Conan version, operating system version and any tool that can be relevant. - [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion. Hello, I am using Conan 1.18.1 on my Windows 10. I use internally `git describe` to build my version number of my package. This uses the latest tag, that has been put into git. Now with the new feature `shallow clone`, to safe time and space, I get the problem, that there are no `tags`in the history and I am unable to generate my package version. It would be very nice, to have a property `shallow` in the `scm` tool. The best would be, to set the default, to `shallow=False` to not break compatibility.
1.0
Feature shallow clone for Git brings error (Version 1.18.1) - To help us debug your issue please explain: - [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md). - [x] I've specified the Conan version, operating system version and any tool that can be relevant. - [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion. Hello, I am using Conan 1.18.1 on my Windows 10. I use internally `git describe` to build my version number of my package. This uses the latest tag, that has been put into git. Now with the new feature `shallow clone`, to safe time and space, I get the problem, that there are no `tags`in the history and I am unable to generate my package version. It would be very nice, to have a property `shallow` in the `scm` tool. The best would be, to set the default, to `shallow=False` to not break compatibility.
priority
feature shallow clone for git brings error version to help us debug your issue please explain i ve read the i ve specified the conan version operating system version and any tool that can be relevant i ve explained the steps to reproduce the error or the motivation use case of the question suggestion hello i am using conan on my windows i use internally git describe to build my version number of my package this uses the latest tag that has been put into git now with the new feature shallow clone to safe time and space i get the problem that there are no tags in the history and i am unable to generate my package version it would be very nice to have a property shallow in the scm tool the best would be to set the default to shallow false to not break compatibility
1
547,859
16,048,707,176
IssuesEvent
2021-04-22 16:23:57
vaticle/typedb
https://api.github.com/repos/vaticle/typedb
closed
Inherited, overridable, scoped Roles
priority: high type: refactor
## Problem to Solve We extensively modify the behaviour of roles: 1. we scope roles to relations, allowing users to re-use names across exclusive relation hierarchies and therefore removing a large naming burden. 2. We include inheritance of roles in relations 3. We allow overriding relation roles (using the pre-existing `as`, which should make the child role playable, but block the parent role from being inherited as well) ## Proposed Solution A larger piece of work including the following work items, for release with 1.9. It will induce a breaking DB change. - [x] add role scoping to the grammar (https://github.com/graknlabs/graql/pull/140) - [ ] update Grakn to reflect the above changes, scoping roles to relation within Grakn (https://github.com/graknlabs/grakn/pull/5802) - [ ] implement role inheritance within Grakn Core - [ ] implement role overriding within Grakn Core - [ ] update the Protocol to ensure that concept type Labels have two portions: scope and name; role `relates` takes a string instead of a Role instance - [ ] propagate all changes to other clients - [ ] propagate all changes to examples Renable tests: `GraknClientIT` - testDeletingAConcept_TheConceptIsDeleted Fix: Server side implementation of RPC concept API
1.0
Inherited, overridable, scoped Roles - ## Problem to Solve We extensively modify the behaviour of roles: 1. we scope roles to relations, allowing users to re-use names across exclusive relation hierarchies and therefore removing a large naming burden. 2. We include inheritance of roles in relations 3. We allow overriding relation roles (using the pre-existing `as`, which should make the child role playable, but block the parent role from being inherited as well) ## Proposed Solution A larger piece of work including the following work items, for release with 1.9. It will induce a breaking DB change. - [x] add role scoping to the grammar (https://github.com/graknlabs/graql/pull/140) - [ ] update Grakn to reflect the above changes, scoping roles to relation within Grakn (https://github.com/graknlabs/grakn/pull/5802) - [ ] implement role inheritance within Grakn Core - [ ] implement role overriding within Grakn Core - [ ] update the Protocol to ensure that concept type Labels have two portions: scope and name; role `relates` takes a string instead of a Role instance - [ ] propagate all changes to other clients - [ ] propagate all changes to examples Renable tests: `GraknClientIT` - testDeletingAConcept_TheConceptIsDeleted Fix: Server side implementation of RPC concept API
priority
inherited overridable scoped roles problem to solve we extensively modify the behaviour of roles we scope roles to relations allowing users to re use names across exclusive relation hierarchies and therefore removing a large naming burden we include inheritance of roles in relations we allow overriding relation roles using the pre existing as which should make the child role playable but block the parent role from being inherited as well proposed solution a larger piece of work including the following work items for release with it will induce a breaking db change add role scoping to the grammar update grakn to reflect the above changes scoping roles to relation within grakn implement role inheritance within grakn core implement role overriding within grakn core update the protocol to ensure that concept type labels have two portions scope and name role relates takes a string instead of a role instance propagate all changes to other clients propagate all changes to examples renable tests graknclientit testdeletingaconcept theconceptisdeleted fix server side implementation of rpc concept api
1
270,674
8,468,383,262
IssuesEvent
2018-10-23 19:35:07
ClinGen/clincoded
https://api.github.com/repos/ClinGen/clincoded
opened
Change MONDO ID for GDM
EP request GCI R23 external colleague priority: high
"PRPS1- deafness, X-linked 6" curation need to be changed to PRPS1-Deficiency Disorders (MONDO:0100061) for this GDM: https://curation.clinicalgenome.org/curation-central/?gdm=b8931f64-ec97-4171-8163-23ded968ea42
1.0
Change MONDO ID for GDM - "PRPS1- deafness, X-linked 6" curation need to be changed to PRPS1-Deficiency Disorders (MONDO:0100061) for this GDM: https://curation.clinicalgenome.org/curation-central/?gdm=b8931f64-ec97-4171-8163-23ded968ea42
priority
change mondo id for gdm deafness x linked curation need to be changed to deficiency disorders mondo for this gdm
1
534,219
15,612,440,746
IssuesEvent
2021-03-19 15:23:24
mantidproject/mantidimaging
https://api.github.com/repos/mantidproject/mantidimaging
closed
Don't reset zoom when changing operation parameters
Component: GUI High Priority Type: Bug
### Summary It is useful to be able to adjust parameters while zoomed in to see their effect. ### Steps To Reproduce Open dataset, open operations window. Switch to median. Zoom in so that pixel details are visible. Change the kernel size ### Expected Behaviour preview updates. ### Current Behaviour preview zoom level resets ### Context reported with 2.0. also occurs on master ### Screenshot(s) ![image](https://user-images.githubusercontent.com/74248560/111164310-e09dee80-8595-11eb-8c16-2f623d3d60c3.png) Then click to adjust kernel size, and zoom resets. ![image](https://user-images.githubusercontent.com/74248560/111164419-fa3f3600-8595-11eb-92bf-f4a94e266632.png)
1.0
Don't reset zoom when changing operation parameters - ### Summary It is useful to be able to adjust parameters while zoomed in to see their effect. ### Steps To Reproduce Open dataset, open operations window. Switch to median. Zoom in so that pixel details are visible. Change the kernel size ### Expected Behaviour preview updates. ### Current Behaviour preview zoom level resets ### Context reported with 2.0. also occurs on master ### Screenshot(s) ![image](https://user-images.githubusercontent.com/74248560/111164310-e09dee80-8595-11eb-8c16-2f623d3d60c3.png) Then click to adjust kernel size, and zoom resets. ![image](https://user-images.githubusercontent.com/74248560/111164419-fa3f3600-8595-11eb-92bf-f4a94e266632.png)
priority
don t reset zoom when changing operation parameters summary it is useful to be able to adjust parameters while zoomed in to see their effect steps to reproduce open dataset open operations window switch to median zoom in so that pixel details are visible change the kernel size expected behaviour preview updates current behaviour preview zoom level resets context reported with also occurs on master screenshot s then click to adjust kernel size and zoom resets
1
719,893
24,773,255,833
IssuesEvent
2022-10-23 12:15:32
bounswe/bounswe2022group5
https://api.github.com/repos/bounswe/bounswe2022group5
closed
Creating initial mobile app structure
High Priority Type: Enhancement Status: Need Review
***Description*:** Initial flutter mobile app structure should be created and pushed to the repository. ***Todo's*:** - [x] Initial screens should be created. - [x] Initial model structure should be created. (Later it can be changed according to backend team's decisions.) - [x] This structure should be pushed to the repo. ***Reviewers*:** @enginoguzhansenol ***Task Deadline*:** 23.10.2022 14:00 ***Review Deadline*:** 23.10.2022 15:00
1.0
Creating initial mobile app structure - ***Description*:** Initial flutter mobile app structure should be created and pushed to the repository. ***Todo's*:** - [x] Initial screens should be created. - [x] Initial model structure should be created. (Later it can be changed according to backend team's decisions.) - [x] This structure should be pushed to the repo. ***Reviewers*:** @enginoguzhansenol ***Task Deadline*:** 23.10.2022 14:00 ***Review Deadline*:** 23.10.2022 15:00
priority
creating initial mobile app structure description initial flutter mobile app structure should be created and pushed to the repository todo s initial screens should be created initial model structure should be created later it can be changed according to backend team s decisions this structure should be pushed to the repo reviewers enginoguzhansenol task deadline review deadline
1
799,706
28,312,336,886
IssuesEvent
2023-04-10 16:30:17
ggerganov/llama.cpp
https://api.github.com/repos/ggerganov/llama.cpp
closed
Fix quantize_row_q4_1() with ARM_NEON
bug high priority
It is currently bugged. See results of `quantize-stats` on M1: ``` $ ./quantize-stats -m models/7B/ggml-model-f16.bin Loading model llama.cpp: loading model from models/7B/ggml-model-f16.bin llama_model_load_internal: format = ggjt v1 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 256 llama_model_load_internal: n_embd = 4096 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 32 llama_model_load_internal: n_layer = 32 llama_model_load_internal: n_rot = 128 llama_model_load_internal: f16 = 1 llama_model_load_internal: n_ff = 11008 llama_model_load_internal: n_parts = 1 llama_model_load_internal: model size = 7B llama_model_load_internal: ggml ctx size = 59.11 KB llama_model_load_internal: mem required = 14645.07 MB (+ 2052.00 MB per state) llama_init_from_file: kv self size = 256.00 MB note: source model is f16 testing 291 layers with max size 131072000 q4_0 : rmse 0.00222150, maxerr 0.18429124, 95pct<0.0040, median<0.0018 q4_1 : rmse 0.00360044, maxerr 0.26373291, 95pct<0.0066, median<0.0028 main: total time = 93546.68 ms ``` The RMSE is too high - worse than Q4_0. There is a bug in the following piece of code: https://github.com/ggerganov/llama.cpp/blob/180b693a47b6b825288ef9f2c39d24b6eea4eea6/ggml.c#L922-L955 We should fix it
1.0
Fix quantize_row_q4_1() with ARM_NEON - It is currently bugged. See results of `quantize-stats` on M1: ``` $ ./quantize-stats -m models/7B/ggml-model-f16.bin Loading model llama.cpp: loading model from models/7B/ggml-model-f16.bin llama_model_load_internal: format = ggjt v1 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 256 llama_model_load_internal: n_embd = 4096 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 32 llama_model_load_internal: n_layer = 32 llama_model_load_internal: n_rot = 128 llama_model_load_internal: f16 = 1 llama_model_load_internal: n_ff = 11008 llama_model_load_internal: n_parts = 1 llama_model_load_internal: model size = 7B llama_model_load_internal: ggml ctx size = 59.11 KB llama_model_load_internal: mem required = 14645.07 MB (+ 2052.00 MB per state) llama_init_from_file: kv self size = 256.00 MB note: source model is f16 testing 291 layers with max size 131072000 q4_0 : rmse 0.00222150, maxerr 0.18429124, 95pct<0.0040, median<0.0018 q4_1 : rmse 0.00360044, maxerr 0.26373291, 95pct<0.0066, median<0.0028 main: total time = 93546.68 ms ``` The RMSE is too high - worse than Q4_0. There is a bug in the following piece of code: https://github.com/ggerganov/llama.cpp/blob/180b693a47b6b825288ef9f2c39d24b6eea4eea6/ggml.c#L922-L955 We should fix it
priority
fix quantize row with arm neon it is currently bugged see results of quantize stats on quantize stats m models ggml model bin loading model llama cpp loading model from models ggml model bin llama model load internal format ggjt latest llama model load internal n vocab llama model load internal n ctx llama model load internal n embd llama model load internal n mult llama model load internal n head llama model load internal n layer llama model load internal n rot llama model load internal llama model load internal n ff llama model load internal n parts llama model load internal model size llama model load internal ggml ctx size kb llama model load internal mem required mb mb per state llama init from file kv self size mb note source model is testing layers with max size rmse maxerr median rmse maxerr median main total time ms the rmse is too high worse than there is a bug in the following piece of code we should fix it
1
340,889
10,279,592,704
IssuesEvent
2019-08-26 00:39:54
okTurtles/group-income-simple
https://api.github.com/repos/okTurtles/group-income-simple
closed
Bugs in GroupsList (group switcher)
App:Frontend Kind:Bug Level:Starter Note:Up-for-grabs Priority:High
### Problem - Upon switching to another group, the group icon isn't updated - Upon switching to another group, it doesn't seem to let you switch back to the first group Also, possibly unrelated, there is log spam along these lines: ``` [Vue tip]: Prop "itemid" is passed to component <Anonymous>, but the declared prop name is "itemId". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "item-id" instead of "itemId". vue.common.dev.js:636:14 [Vue tip]: Prop "hasdivider" is passed to component <Anonymous>, but the declared prop name is "hasDivider". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "has-divider" instead of "hasDivider". vue.common.dev.js:636:14 [Vue tip]: Prop "disableradius" is passed to component <Anonymous>, but the declared prop name is "disableRadius". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "disable-radius" instead of "disableRadius". vue.common.dev.js:636:14 [Vue tip]: Prop "isactive" is passed to component <Anonymous>, but the declared prop name is "isActive". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "is-active" instead of "isActive". ``` ### Solution Fix
1.0
Bugs in GroupsList (group switcher) - ### Problem - Upon switching to another group, the group icon isn't updated - Upon switching to another group, it doesn't seem to let you switch back to the first group Also, possibly unrelated, there is log spam along these lines: ``` [Vue tip]: Prop "itemid" is passed to component <Anonymous>, but the declared prop name is "itemId". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "item-id" instead of "itemId". vue.common.dev.js:636:14 [Vue tip]: Prop "hasdivider" is passed to component <Anonymous>, but the declared prop name is "hasDivider". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "has-divider" instead of "hasDivider". vue.common.dev.js:636:14 [Vue tip]: Prop "disableradius" is passed to component <Anonymous>, but the declared prop name is "disableRadius". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "disable-radius" instead of "disableRadius". vue.common.dev.js:636:14 [Vue tip]: Prop "isactive" is passed to component <Anonymous>, but the declared prop name is "isActive". Note that HTML attributes are case-insensitive and camelCased props need to use their kebab-case equivalents when using in-DOM templates. You should probably use "is-active" instead of "isActive". ``` ### Solution Fix
priority
bugs in groupslist group switcher problem upon switching to another group the group icon isn t updated upon switching to another group it doesn t seem to let you switch back to the first group also possibly unrelated there is log spam along these lines prop itemid is passed to component but the declared prop name is itemid note that html attributes are case insensitive and camelcased props need to use their kebab case equivalents when using in dom templates you should probably use item id instead of itemid vue common dev js prop hasdivider is passed to component but the declared prop name is hasdivider note that html attributes are case insensitive and camelcased props need to use their kebab case equivalents when using in dom templates you should probably use has divider instead of hasdivider vue common dev js prop disableradius is passed to component but the declared prop name is disableradius note that html attributes are case insensitive and camelcased props need to use their kebab case equivalents when using in dom templates you should probably use disable radius instead of disableradius vue common dev js prop isactive is passed to component but the declared prop name is isactive note that html attributes are case insensitive and camelcased props need to use their kebab case equivalents when using in dom templates you should probably use is active instead of isactive solution fix
1
611,996
18,988,373,184
IssuesEvent
2021-11-22 01:59:04
co-cart/co-cart
https://api.github.com/repos/co-cart/co-cart
closed
Custom Cart Item Data not returned, when added to cart
bug:confirmed priority:high good first issue status:changelog added API: Cart
## Describe the bug When I add an item to cart with custom `item_data` and set `return_item` to true, the cart Item is returned, but `cart_item_data` is empty. When the cartItem is fetched (`wp-json/cocart/v2/cart/items`), the `cart_item_data` property is returned correctly. ## Prerequisites <!-- Mark completed items with an [x] --> - [x ] I have searched for similar issues in both open and closed tickets and cannot find a duplicate. - [ x] The issue still exists against the latest `master` branch of CoCart on GitHub (this is **not** the same version as on WordPress.org!) - [x ] I have attempted to find the simplest possible steps to reproduce the issue. - [ ] I have included a failing test as a pull request (Optional) - [x ] I have installed the requirements to run this plugin. ## Steps to reproduce the issue <!-- I need to be able to reproduce the bug in order to fix it so please be descriptive! --> 1. Submit an add-to-cart request [POST] `wp-json/cocart/v2/cart/add-item` ``` { "id":"<simple_product_id>", "return_item":true, "item_data": { "my_test_data":"my-test-value" } } ``` 2. Check the response: `cart_item_data` property is empty 3. Fetch the item via `wp-json/cocart/v2/cart/items`: `cart_item_data` is correct ## Expected/actual behaviour `cart_item_data` should be retunred with values after the add to cart request. ## Isolating the problem <!-- Mark completed items with an [x] --> - [ x] This bug happens with only WooCommerce and CoCart plugin are active. - [ x] This bug happens with a default WordPress theme active. - [ ] This bug happens with the WordPress theme Storefront active. - [x ] This bug happens with the latest release of WooCommerce active. - [ ] This bug happens only when I authenticate as a customer. - [ ] This bug happens only when I authenticate as administrator. - [x ] I can reproduce this bug consistently using the steps above.
1.0
Custom Cart Item Data not returned, when added to cart - ## Describe the bug When I add an item to cart with custom `item_data` and set `return_item` to true, the cart Item is returned, but `cart_item_data` is empty. When the cartItem is fetched (`wp-json/cocart/v2/cart/items`), the `cart_item_data` property is returned correctly. ## Prerequisites <!-- Mark completed items with an [x] --> - [x ] I have searched for similar issues in both open and closed tickets and cannot find a duplicate. - [ x] The issue still exists against the latest `master` branch of CoCart on GitHub (this is **not** the same version as on WordPress.org!) - [x ] I have attempted to find the simplest possible steps to reproduce the issue. - [ ] I have included a failing test as a pull request (Optional) - [x ] I have installed the requirements to run this plugin. ## Steps to reproduce the issue <!-- I need to be able to reproduce the bug in order to fix it so please be descriptive! --> 1. Submit an add-to-cart request [POST] `wp-json/cocart/v2/cart/add-item` ``` { "id":"<simple_product_id>", "return_item":true, "item_data": { "my_test_data":"my-test-value" } } ``` 2. Check the response: `cart_item_data` property is empty 3. Fetch the item via `wp-json/cocart/v2/cart/items`: `cart_item_data` is correct ## Expected/actual behaviour `cart_item_data` should be retunred with values after the add to cart request. ## Isolating the problem <!-- Mark completed items with an [x] --> - [ x] This bug happens with only WooCommerce and CoCart plugin are active. - [ x] This bug happens with a default WordPress theme active. - [ ] This bug happens with the WordPress theme Storefront active. - [x ] This bug happens with the latest release of WooCommerce active. - [ ] This bug happens only when I authenticate as a customer. - [ ] This bug happens only when I authenticate as administrator. - [x ] I can reproduce this bug consistently using the steps above.
priority
custom cart item data not returned when added to cart describe the bug when i add an item to cart with custom item data and set return item to true the cart item is returned but cart item data is empty when the cartitem is fetched wp json cocart cart items the cart item data property is returned correctly prerequisites i have searched for similar issues in both open and closed tickets and cannot find a duplicate the issue still exists against the latest master branch of cocart on github this is not the same version as on wordpress org i have attempted to find the simplest possible steps to reproduce the issue i have included a failing test as a pull request optional i have installed the requirements to run this plugin steps to reproduce the issue submit an add to cart request wp json cocart cart add item id return item true item data my test data my test value check the response cart item data property is empty fetch the item via wp json cocart cart items cart item data is correct expected actual behaviour cart item data should be retunred with values after the add to cart request isolating the problem this bug happens with only woocommerce and cocart plugin are active this bug happens with a default wordpress theme active this bug happens with the wordpress theme storefront active this bug happens with the latest release of woocommerce active this bug happens only when i authenticate as a customer this bug happens only when i authenticate as administrator i can reproduce this bug consistently using the steps above
1
238,579
7,781,017,657
IssuesEvent
2018-06-05 22:03:52
brijeshshah13/OpenCI
https://api.github.com/repos/brijeshshah13/OpenCI
closed
Implement Logout functionality
Priority: High Status: Completed
**Is it a ?** - [ ] Bug Report - [x] Feature Request - [ ] Chore ### Feature Request **Feature Description** Log out the user upon click on `Logout`. **Would you like to work on the issue?** Yes.
1.0
Implement Logout functionality - **Is it a ?** - [ ] Bug Report - [x] Feature Request - [ ] Chore ### Feature Request **Feature Description** Log out the user upon click on `Logout`. **Would you like to work on the issue?** Yes.
priority
implement logout functionality is it a bug report feature request chore feature request feature description log out the user upon click on logout would you like to work on the issue yes
1
316,582
9,651,976,122
IssuesEvent
2019-05-18 13:08:38
the-blue-alliance/the-blue-alliance-ios
https://api.github.com/repos/the-blue-alliance/the-blue-alliance-ios
closed
Fix TeamSummaryViewController.tableView(_:heightForRowAt:)
bug high priority
``` Crashed: com.apple.main-thread 0 The Blue Alliance 0x100c5efa4 @objc TeamSummaryViewController.tableView(_:heightForRowAt:) (<compiler-generated>) 1 UIKitCore 0x24383c024 -[UITableView _classicHeightForRowAtIndexPath:] + 164 2 UIKitCore 0x24383c258 -[UITableView _heightForCell:atIndexPath:] + 180 3 UIKitCore 0x243823bcc __53-[UITableView _configureCellForDisplay:forIndexPath:]_block_invoke + 3192 4 UIKitCore 0x243a945d4 +[UIView(Animation) performWithoutAnimation:] + 104 5 UIKitCore 0x243822e70 -[UITableView _configureCellForDisplay:forIndexPath:] + 248 6 UIKitCore 0x243834ad8 -[UITableView _createPreparedCellForGlobalRow:withIndexPath:willDisplay:] + 840 7 UIKitCore 0x243834f38 -[UITableView _createPreparedCellForGlobalRow:willDisplay:] + 80 8 UIKitCore 0x243801740 -[UITableView _updateVisibleCellsNow:isRecursive:] + 2260 9 UIKitCore 0x24381ea60 -[UITableView layoutSubviews] + 140 10 UIKitCore 0x243aa1e54 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1292 11 QuartzCore 0x21b6721f0 -[CALayer layoutSublayers] + 184 12 QuartzCore 0x21b677198 CA::Layer::layout_if_needed(CA::Transaction*) + 332 13 QuartzCore 0x21b5da0a8 CA::Context::commit_transaction(CA::Transaction*) + 348 14 QuartzCore 0x21b608108 CA::Transaction::commit() + 640 15 UIKitCore 0x2436407b0 _afterCACommitHandler + 224 16 CoreFoundation 0x21717589c __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 32 17 CoreFoundation 0x2171705c4 __CFRunLoopDoObservers + 412 18 CoreFoundation 0x217170b40 __CFRunLoopRun + 1228 19 CoreFoundation 0x217170354 CFRunLoopRunSpecific + 436 20 GraphicsServices 0x21937079c GSEventRunModal + 104 21 UIKitCore 0x243619b68 UIApplicationMain + 212 22 The Blue Alliance 0x100bde018 main (main.swift:9) 23 libdyld.dylib 0x216c368e0 start + 4 ```
1.0
Fix TeamSummaryViewController.tableView(_:heightForRowAt:) - ``` Crashed: com.apple.main-thread 0 The Blue Alliance 0x100c5efa4 @objc TeamSummaryViewController.tableView(_:heightForRowAt:) (<compiler-generated>) 1 UIKitCore 0x24383c024 -[UITableView _classicHeightForRowAtIndexPath:] + 164 2 UIKitCore 0x24383c258 -[UITableView _heightForCell:atIndexPath:] + 180 3 UIKitCore 0x243823bcc __53-[UITableView _configureCellForDisplay:forIndexPath:]_block_invoke + 3192 4 UIKitCore 0x243a945d4 +[UIView(Animation) performWithoutAnimation:] + 104 5 UIKitCore 0x243822e70 -[UITableView _configureCellForDisplay:forIndexPath:] + 248 6 UIKitCore 0x243834ad8 -[UITableView _createPreparedCellForGlobalRow:withIndexPath:willDisplay:] + 840 7 UIKitCore 0x243834f38 -[UITableView _createPreparedCellForGlobalRow:willDisplay:] + 80 8 UIKitCore 0x243801740 -[UITableView _updateVisibleCellsNow:isRecursive:] + 2260 9 UIKitCore 0x24381ea60 -[UITableView layoutSubviews] + 140 10 UIKitCore 0x243aa1e54 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1292 11 QuartzCore 0x21b6721f0 -[CALayer layoutSublayers] + 184 12 QuartzCore 0x21b677198 CA::Layer::layout_if_needed(CA::Transaction*) + 332 13 QuartzCore 0x21b5da0a8 CA::Context::commit_transaction(CA::Transaction*) + 348 14 QuartzCore 0x21b608108 CA::Transaction::commit() + 640 15 UIKitCore 0x2436407b0 _afterCACommitHandler + 224 16 CoreFoundation 0x21717589c __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 32 17 CoreFoundation 0x2171705c4 __CFRunLoopDoObservers + 412 18 CoreFoundation 0x217170b40 __CFRunLoopRun + 1228 19 CoreFoundation 0x217170354 CFRunLoopRunSpecific + 436 20 GraphicsServices 0x21937079c GSEventRunModal + 104 21 UIKitCore 0x243619b68 UIApplicationMain + 212 22 The Blue Alliance 0x100bde018 main (main.swift:9) 23 libdyld.dylib 0x216c368e0 start + 4 ```
priority
fix teamsummaryviewcontroller tableview heightforrowat crashed com apple main thread the blue alliance objc teamsummaryviewcontroller tableview heightforrowat uikitcore uikitcore uikitcore block invoke uikitcore uikitcore uikitcore uikitcore uikitcore uikitcore uikitcore quartzcore quartzcore ca layer layout if needed ca transaction quartzcore ca context commit transaction ca transaction quartzcore ca transaction commit uikitcore aftercacommithandler corefoundation cfrunloop is calling out to an observer callback function corefoundation cfrunloopdoobservers corefoundation cfrunlooprun corefoundation cfrunlooprunspecific graphicsservices gseventrunmodal uikitcore uiapplicationmain the blue alliance main main swift libdyld dylib start
1
695,681
23,868,289,464
IssuesEvent
2022-09-07 12:58:46
PNNL-CompBio/Snekmer
https://api.github.com/repos/PNNL-CompBio/Snekmer
closed
Speed issue
bug HighPriority
For even moderately sized input files (5k, e.g.) kmerize is taking a long time (hour+) which is way too long. The problem was introduced by the previous fix for the memory issue, and it's in the vectorize.py/make_feature_matrix function, which is using a very slow way of constructing a matrix from individual lists.
1.0
Speed issue - For even moderately sized input files (5k, e.g.) kmerize is taking a long time (hour+) which is way too long. The problem was introduced by the previous fix for the memory issue, and it's in the vectorize.py/make_feature_matrix function, which is using a very slow way of constructing a matrix from individual lists.
priority
speed issue for even moderately sized input files e g kmerize is taking a long time hour which is way too long the problem was introduced by the previous fix for the memory issue and it s in the vectorize py make feature matrix function which is using a very slow way of constructing a matrix from individual lists
1
783,036
27,516,207,504
IssuesEvent
2023-03-06 12:09:42
bounswe/bounswe2023group5
https://api.github.com/repos/bounswe/bounswe2023group5
closed
Setting Up Communication Channel
Priority: High Type: Feature Status: Done
Establishing a communication channel to talk about tasks, make weekly plans and distribute the tasks in the team etc.
1.0
Setting Up Communication Channel - Establishing a communication channel to talk about tasks, make weekly plans and distribute the tasks in the team etc.
priority
setting up communication channel establishing a communication channel to talk about tasks make weekly plans and distribute the tasks in the team etc
1
62,107
3,172,078,277
IssuesEvent
2015-09-23 04:37:10
DynamoRIO/drmemory
https://api.github.com/repos/DynamoRIO/drmemory
closed
CRASH destroy_Rtl_heap() on chrome unit_tests
Bug-ToolCrash Component-FullMode Hotlist-Release OpSys-Win8.1 Priority-High Status-CannotReproduce Status-NeedInfo
Xref https://code.google.com/p/chromium/issues/detail?id=526903 When trying to repro on Win8.1, dev hit this crash: ``` [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from ChromeStabilityMetricsProviderTest [ RUN ] ChromeStabilityMetricsProviderTest.BrowserChildProcessObserver [ OK ] ChromeStabilityMetricsProviderTest.BrowserChildProcessObserver (6077 ms) [ RUN ] ChromeStabilityMetricsProviderTest.NotificationObserver [ OK ] ChromeStabilityMetricsProviderTest.NotificationObserver (379 ms) [----------] 2 tests from ChromeStabilityMetricsProviderTest (6647 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (6728 ms total) [ PASSED ] 2 tests. <Application d:\src\gclient\src\out\Release\unit_tests.exe (940). Dr. Memory internal crash at PC 0x738348ca. Please report this at http://drmemory.org/issues. Program aborted. 0xc0000005 0x00000000 0x738348ca 0x738348ca 0x00000000 0x08e9002c Base: 0x5cb50000 Registers: eax=0x23779c30 ebx=0x08e90000 ecx=0x2375e024 edx=0x08e90000 esi=0x00000000 edi=0x00000001 esp=0x21d8ed40 ebp=0x21d8ed64 eflags=0x000 1.9.0-0-(Aug 28 2015 22:56:18) win63 -no_dynamic_options -disasm_mask 8 -logdir 'C:\Users\wfh\AppData\LocalLow\vg_logs_bb0ii3\dynamorio' -client_lib 'd:\src\gclient\src\third_party\drmemory\unpacked\bin\release\drmemorylib.dll;0;-suppress `d:\src\gclient\src\tools\valgrind\drmemory\suppressions.txt` -suppress `d:\src\gclient\src\tools\valgrind\drmemory\supp 0x21d8ed64 0x738351a0 0x21d8edc8 0x21d47680 0x5cbfb52b 0x01b05d5e d:\src\gclient\src\third_party\drmemory\unpacked\bin\release\drmemorylib.dll=0x73800000 d:\src\gclient\src\third_party\drmemory\unpacked\bin\release/dbghelp.dll=0x5ca30000 C:\Windows/system32/msvcrt.dll=0x067c0000 C:\Windows/system32/kernel32.dll=0x08070000 C:\Windows/system32/KERNELBASE.dll=0x081b0000> ~~Dr.M~~ WARNING: application exited with abnormal code 0xffffffff ``` 05:23 PM ~/drmemory/releases/DrMemory-Windows-1.9.0-RC1 % bin/symquery.exe -e bin/release/drmemorylib.dll -f -a 0x348ca destroy_Rtl_heap+0xa d:\drmemory_package\common\alloc_replace.c:2867+0x3 % bin/symquery.exe -e bin/release/drmemorylib.dll -f -a 0x351a0 alloc_replace_exit+0xe0 d:\drmemory_package\common\alloc_replace.c:4758+0x10 Same crash with the DrMem ver in chromium third_party. With debug ends up seeing an assert: ``` ASSERT FAILURE (thread 7324): d:\drmemory_package\common\alloc_replace.c:4337: in_table (pre-us libc missed in heap walk) ```
1.0
CRASH destroy_Rtl_heap() on chrome unit_tests - Xref https://code.google.com/p/chromium/issues/detail?id=526903 When trying to repro on Win8.1, dev hit this crash: ``` [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from ChromeStabilityMetricsProviderTest [ RUN ] ChromeStabilityMetricsProviderTest.BrowserChildProcessObserver [ OK ] ChromeStabilityMetricsProviderTest.BrowserChildProcessObserver (6077 ms) [ RUN ] ChromeStabilityMetricsProviderTest.NotificationObserver [ OK ] ChromeStabilityMetricsProviderTest.NotificationObserver (379 ms) [----------] 2 tests from ChromeStabilityMetricsProviderTest (6647 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (6728 ms total) [ PASSED ] 2 tests. <Application d:\src\gclient\src\out\Release\unit_tests.exe (940). Dr. Memory internal crash at PC 0x738348ca. Please report this at http://drmemory.org/issues. Program aborted. 0xc0000005 0x00000000 0x738348ca 0x738348ca 0x00000000 0x08e9002c Base: 0x5cb50000 Registers: eax=0x23779c30 ebx=0x08e90000 ecx=0x2375e024 edx=0x08e90000 esi=0x00000000 edi=0x00000001 esp=0x21d8ed40 ebp=0x21d8ed64 eflags=0x000 1.9.0-0-(Aug 28 2015 22:56:18) win63 -no_dynamic_options -disasm_mask 8 -logdir 'C:\Users\wfh\AppData\LocalLow\vg_logs_bb0ii3\dynamorio' -client_lib 'd:\src\gclient\src\third_party\drmemory\unpacked\bin\release\drmemorylib.dll;0;-suppress `d:\src\gclient\src\tools\valgrind\drmemory\suppressions.txt` -suppress `d:\src\gclient\src\tools\valgrind\drmemory\supp 0x21d8ed64 0x738351a0 0x21d8edc8 0x21d47680 0x5cbfb52b 0x01b05d5e d:\src\gclient\src\third_party\drmemory\unpacked\bin\release\drmemorylib.dll=0x73800000 d:\src\gclient\src\third_party\drmemory\unpacked\bin\release/dbghelp.dll=0x5ca30000 C:\Windows/system32/msvcrt.dll=0x067c0000 C:\Windows/system32/kernel32.dll=0x08070000 C:\Windows/system32/KERNELBASE.dll=0x081b0000> ~~Dr.M~~ WARNING: application exited with abnormal code 0xffffffff ``` 05:23 PM ~/drmemory/releases/DrMemory-Windows-1.9.0-RC1 % bin/symquery.exe -e bin/release/drmemorylib.dll -f -a 0x348ca destroy_Rtl_heap+0xa d:\drmemory_package\common\alloc_replace.c:2867+0x3 % bin/symquery.exe -e bin/release/drmemorylib.dll -f -a 0x351a0 alloc_replace_exit+0xe0 d:\drmemory_package\common\alloc_replace.c:4758+0x10 Same crash with the DrMem ver in chromium third_party. With debug ends up seeing an assert: ``` ASSERT FAILURE (thread 7324): d:\drmemory_package\common\alloc_replace.c:4337: in_table (pre-us libc missed in heap walk) ```
priority
crash destroy rtl heap on chrome unit tests xref when trying to repro on dev hit this crash running tests from test case global test environment set up tests from chromestabilitymetricsprovidertest chromestabilitymetricsprovidertest browserchildprocessobserver chromestabilitymetricsprovidertest browserchildprocessobserver ms chromestabilitymetricsprovidertest notificationobserver chromestabilitymetricsprovidertest notificationobserver ms tests from chromestabilitymetricsprovidertest ms total global test environment tear down tests from test case ran ms total tests application d src gclient src out release unit tests exe dr memory internal crash at pc please report this at program aborted base registers eax ebx ecx edx esi edi esp ebp eflags aug no dynamic options disasm mask logdir c users wfh appdata locallow vg logs dynamorio client lib d src gclient src third party drmemory unpacked bin release drmemorylib dll suppress d src gclient src tools valgrind drmemory suppressions txt suppress d src gclient src tools valgrind drmemory supp d src gclient src third party drmemory unpacked bin release drmemorylib dll d src gclient src third party drmemory unpacked bin release dbghelp dll c windows msvcrt dll c windows dll c windows kernelbase dll dr m warning application exited with abnormal code pm drmemory releases drmemory windows bin symquery exe e bin release drmemorylib dll f a destroy rtl heap d drmemory package common alloc replace c bin symquery exe e bin release drmemorylib dll f a alloc replace exit d drmemory package common alloc replace c same crash with the drmem ver in chromium third party with debug ends up seeing an assert assert failure thread d drmemory package common alloc replace c in table pre us libc missed in heap walk
1
378,698
11,206,659,029
IssuesEvent
2020-01-05 23:01:42
Kipjr/ldap_login
https://api.github.com/repos/Kipjr/ldap_login
closed
[Vulnerability] data.dat exposed
Priority: High Section: Security Status: Confirmed Status: Review Needed Type: Bug
In a normal Piwigo installation, the plugin folders are not blocked. Therefore everyone can view every file in the directories. This is especially serious for the `data.dat` file which may contain sensible data like an AD password. To see if you are affected, open http(s)://<your_piwigo_installation>/plugins/Ldap_Login/data.dat in your browser. As a workaround you should advise users to block the access to the file (or the whole plugins(/Ldap_Login) directory) in the server settings or with an .htaccess file. A better way would be to store the plugin settings in a php file which will be interpreted by the server and not just displayed.
1.0
[Vulnerability] data.dat exposed - In a normal Piwigo installation, the plugin folders are not blocked. Therefore everyone can view every file in the directories. This is especially serious for the `data.dat` file which may contain sensible data like an AD password. To see if you are affected, open http(s)://<your_piwigo_installation>/plugins/Ldap_Login/data.dat in your browser. As a workaround you should advise users to block the access to the file (or the whole plugins(/Ldap_Login) directory) in the server settings or with an .htaccess file. A better way would be to store the plugin settings in a php file which will be interpreted by the server and not just displayed.
priority
data dat exposed in a normal piwigo installation the plugin folders are not blocked therefore everyone can view every file in the directories this is especially serious for the data dat file which may contain sensible data like an ad password to see if you are affected open http s plugins ldap login data dat in your browser as a workaround you should advise users to block the access to the file or the whole plugins ldap login directory in the server settings or with an htaccess file a better way would be to store the plugin settings in a php file which will be interpreted by the server and not just displayed
1
680,360
23,267,265,918
IssuesEvent
2022-08-04 18:41:25
Yoooi0/MultiFunPlayer
https://api.github.com/repos/Yoooi0/MultiFunPlayer
closed
Motion providers are causing bad IsDirty flag when axis is auto-homing
bug priority-high
When axis starts to auto-home it causes motion provider update method to return true dirty flag on next update tick even if its idle, which then breaks auto-home: https://github.com/Yoooi0/MultiFunPlayer/blob/349f7a0e9096f2f8e9a6a60ca585988d5dc4cc06/MultiFunPlayer/UI/Controls/ViewModels/ScriptViewModel.cs#L381 Probably need to redesign the whole update loop/context to separate values/dirty flags.
1.0
Motion providers are causing bad IsDirty flag when axis is auto-homing - When axis starts to auto-home it causes motion provider update method to return true dirty flag on next update tick even if its idle, which then breaks auto-home: https://github.com/Yoooi0/MultiFunPlayer/blob/349f7a0e9096f2f8e9a6a60ca585988d5dc4cc06/MultiFunPlayer/UI/Controls/ViewModels/ScriptViewModel.cs#L381 Probably need to redesign the whole update loop/context to separate values/dirty flags.
priority
motion providers are causing bad isdirty flag when axis is auto homing when axis starts to auto home it causes motion provider update method to return true dirty flag on next update tick even if its idle which then breaks auto home probably need to redesign the whole update loop context to separate values dirty flags
1
789,307
27,786,157,871
IssuesEvent
2023-03-17 03:31:29
AY2223S2-CS2103T-F12-2/tp
https://api.github.com/repos/AY2223S2-CS2103T-F12-2/tp
closed
As a student, I can find contacts by their modules
type.Story priority.High
so I can ask them what to expect when I take the modules or form teams/study with them if they are taking similar modules currently.
1.0
As a student, I can find contacts by their modules - so I can ask them what to expect when I take the modules or form teams/study with them if they are taking similar modules currently.
priority
as a student i can find contacts by their modules so i can ask them what to expect when i take the modules or form teams study with them if they are taking similar modules currently
1
668,914
22,603,632,042
IssuesEvent
2022-06-29 11:22:44
Skaant/rimarok-2
https://api.github.com/repos/Skaant/rimarok-2
closed
Typo : font-size base increase + headers margins
⚡ high priority smallie
- [ ] Increase Bootstrap base font size variable - [ ] Add base margins to headers range
1.0
Typo : font-size base increase + headers margins - - [ ] Increase Bootstrap base font size variable - [ ] Add base margins to headers range
priority
typo font size base increase headers margins increase bootstrap base font size variable add base margins to headers range
1
248,118
7,927,597,986
IssuesEvent
2018-07-06 08:37:35
Signbank/Global-signbank
https://api.github.com/repos/Signbank/Global-signbank
closed
Cannot upload videos for sign
ASL bug high priority
Looks like you cannot add videos for https://aslsignbank.haskins.yale.edu//dictionary/gloss/1751/ I'm guessing the problem is the fact that lemma ID gloss is only 1 char. EDIT: indeed, the videos are stored in glossvideo/I- , but the code that collects the video probably does not look there.
1.0
Cannot upload videos for sign - Looks like you cannot add videos for https://aslsignbank.haskins.yale.edu//dictionary/gloss/1751/ I'm guessing the problem is the fact that lemma ID gloss is only 1 char. EDIT: indeed, the videos are stored in glossvideo/I- , but the code that collects the video probably does not look there.
priority
cannot upload videos for sign looks like you cannot add videos for i m guessing the problem is the fact that lemma id gloss is only char edit indeed the videos are stored in glossvideo i but the code that collects the video probably does not look there
1
763,957
26,779,317,746
IssuesEvent
2023-01-31 19:44:32
zulip/zulip-mobile
https://api.github.com/repos/zulip/zulip-mobile
closed
[iOS] Links get double-%-encoded, breaking downloads
help wanted a-iOS a-message list P1 high-priority
The effect of this is that a similar symptom to #3303 is still live on iOS. But the cause is unrelated, so making this a separate issue. Originally described at https://github.com/zulip/zulip-mobile/pull/4089#issuecomment-639153366 and https://github.com/zulip/zulip-mobile/pull/4089#issuecomment-639195011 , just before I merged #4089 fixing #3303 . To reproduce: * Went to message list, found a message with an upload that wasn't an image. (To do that: in the webapp searched for `has:attachment`, and scrolled through history to find one that wasn't an image.) Specifically the file happened to be a PDF, with a `.pdf` extension on its name. * Hit the link. Got a browser view. But after a few seconds of loading, the result was an error: ![Simulator Screen Shot - iPhone 8 - 2020-06-04 at 15 32 23](https://user-images.githubusercontent.com/28173/83817031-9c7dd500-a678-11ea-9af2-2a31bde23c07.png) That appears to be from S3 directly, because that's in the location bar at the top. The error message begins: > SignatureDoesNotMatch The request signature we calculated does not match the signature you provided. Check your key and signing method. [... then a bunch of details ...] * Ditto on a second try. I watched the timing closely this time, and it was only about a second between me tapping the link and the error message appearing. So it's not an expiration issue -- there's something else wrong. On further investigation, I have a partial diagnosis: * From that error page, I hit the "share" icon and chose "Copy", to get the URL onto the clipboard. Then went and pasted it elsewhere (the compose box, as the handiest place.) Here it is: `https://zulip-uploads.s3.amazonaws.com/1230/-Opc2L055IYelpraPb1oRDeU/sagas.pdf?Signature=2mpNGb0ysKFpVN8bkkMVsulQSVE%253D&Expires=1591317724&AWSAccessKeyId=AKIAIEVMBCAT2WD3M5KQ` * I went and pulled up the same upload in the webapp, to compare. (It works fine there.) Here's the URL I find in the location bar there: `https://zulip-uploads.s3.amazonaws.com/1230/-Opc2L055IYelpraPb1oRDeU/sagas.pdf?Signature=xipp%2FD69nk89xmkKha3cx6K%2FSSg%3D&Expires=1591317779&AWSAccessKeyId=AKIAIEVMBCAT2WD3M5KQ` * I think the problem is that `%253D`. That's the percent-encoding of `%3D`, which is itself the percent-encoding of `=`. Note there's a `%3D` at the end of the `Signature` query-parameter in the successful URL. Both signatures look like base64, which very often ends with a `=` as padding. * So it seems like we're double-encoding the URL, and as a result the decoding of it has a signature ending in `%3D` instead of in `=` and the signature doesn't validate. Looking at the code, it's clear where that's happening -- in `src/utils/openLink.js`, just in the iOS branch, we call `encodeURI` on the URL. That sure will turn a `%3D` into a `%253D`. Unfortunately it's going to be a bit trickier than just removing that call, because it was put there to fix another bug: 66a9e9d1a15e149ebdfa5a2e9cca06aaa3f29835 (#3507) fixed #3315. So it seems like we need to, in `openLink` on iOS before passing the URL to `SafariView.show`: * %-encode non-ASCII characters -- that's #3315; * but *not* %-encode `%` itself, instead leave it alone; * and presumably also leave alone all the other characters that `encodeURI` leaves alone, like `a` and `Z` and `/`; * and it's not clear if we should %-encode the remaining characters that `encodeURI` affects, like ` ` and `"` and a bunch of other punctuation. A key step in fixing this is going to be just end-to-end testing: make a URL filled with a ton of these characters, post it in a Zulip message, try following that link, and see what URL actually comes through.
1.0
[iOS] Links get double-%-encoded, breaking downloads - The effect of this is that a similar symptom to #3303 is still live on iOS. But the cause is unrelated, so making this a separate issue. Originally described at https://github.com/zulip/zulip-mobile/pull/4089#issuecomment-639153366 and https://github.com/zulip/zulip-mobile/pull/4089#issuecomment-639195011 , just before I merged #4089 fixing #3303 . To reproduce: * Went to message list, found a message with an upload that wasn't an image. (To do that: in the webapp searched for `has:attachment`, and scrolled through history to find one that wasn't an image.) Specifically the file happened to be a PDF, with a `.pdf` extension on its name. * Hit the link. Got a browser view. But after a few seconds of loading, the result was an error: ![Simulator Screen Shot - iPhone 8 - 2020-06-04 at 15 32 23](https://user-images.githubusercontent.com/28173/83817031-9c7dd500-a678-11ea-9af2-2a31bde23c07.png) That appears to be from S3 directly, because that's in the location bar at the top. The error message begins: > SignatureDoesNotMatch The request signature we calculated does not match the signature you provided. Check your key and signing method. [... then a bunch of details ...] * Ditto on a second try. I watched the timing closely this time, and it was only about a second between me tapping the link and the error message appearing. So it's not an expiration issue -- there's something else wrong. On further investigation, I have a partial diagnosis: * From that error page, I hit the "share" icon and chose "Copy", to get the URL onto the clipboard. Then went and pasted it elsewhere (the compose box, as the handiest place.) Here it is: `https://zulip-uploads.s3.amazonaws.com/1230/-Opc2L055IYelpraPb1oRDeU/sagas.pdf?Signature=2mpNGb0ysKFpVN8bkkMVsulQSVE%253D&Expires=1591317724&AWSAccessKeyId=AKIAIEVMBCAT2WD3M5KQ` * I went and pulled up the same upload in the webapp, to compare. (It works fine there.) Here's the URL I find in the location bar there: `https://zulip-uploads.s3.amazonaws.com/1230/-Opc2L055IYelpraPb1oRDeU/sagas.pdf?Signature=xipp%2FD69nk89xmkKha3cx6K%2FSSg%3D&Expires=1591317779&AWSAccessKeyId=AKIAIEVMBCAT2WD3M5KQ` * I think the problem is that `%253D`. That's the percent-encoding of `%3D`, which is itself the percent-encoding of `=`. Note there's a `%3D` at the end of the `Signature` query-parameter in the successful URL. Both signatures look like base64, which very often ends with a `=` as padding. * So it seems like we're double-encoding the URL, and as a result the decoding of it has a signature ending in `%3D` instead of in `=` and the signature doesn't validate. Looking at the code, it's clear where that's happening -- in `src/utils/openLink.js`, just in the iOS branch, we call `encodeURI` on the URL. That sure will turn a `%3D` into a `%253D`. Unfortunately it's going to be a bit trickier than just removing that call, because it was put there to fix another bug: 66a9e9d1a15e149ebdfa5a2e9cca06aaa3f29835 (#3507) fixed #3315. So it seems like we need to, in `openLink` on iOS before passing the URL to `SafariView.show`: * %-encode non-ASCII characters -- that's #3315; * but *not* %-encode `%` itself, instead leave it alone; * and presumably also leave alone all the other characters that `encodeURI` leaves alone, like `a` and `Z` and `/`; * and it's not clear if we should %-encode the remaining characters that `encodeURI` affects, like ` ` and `"` and a bunch of other punctuation. A key step in fixing this is going to be just end-to-end testing: make a URL filled with a ton of these characters, post it in a Zulip message, try following that link, and see what URL actually comes through.
priority
links get double encoded breaking downloads the effect of this is that a similar symptom to is still live on ios but the cause is unrelated so making this a separate issue originally described at and just before i merged fixing to reproduce went to message list found a message with an upload that wasn t an image to do that in the webapp searched for has attachment and scrolled through history to find one that wasn t an image specifically the file happened to be a pdf with a pdf extension on its name hit the link got a browser view but after a few seconds of loading the result was an error that appears to be from directly because that s in the location bar at the top the error message begins signaturedoesnotmatch the request signature we calculated does not match the signature you provided check your key and signing method ditto on a second try i watched the timing closely this time and it was only about a second between me tapping the link and the error message appearing so it s not an expiration issue there s something else wrong on further investigation i have a partial diagnosis from that error page i hit the share icon and chose copy to get the url onto the clipboard then went and pasted it elsewhere the compose box as the handiest place here it is i went and pulled up the same upload in the webapp to compare it works fine there here s the url i find in the location bar there i think the problem is that that s the percent encoding of which is itself the percent encoding of note there s a at the end of the signature query parameter in the successful url both signatures look like which very often ends with a as padding so it seems like we re double encoding the url and as a result the decoding of it has a signature ending in instead of in and the signature doesn t validate looking at the code it s clear where that s happening in src utils openlink js just in the ios branch we call encodeuri on the url that sure will turn a into a unfortunately it s going to be a bit trickier than just removing that call because it was put there to fix another bug fixed so it seems like we need to in openlink on ios before passing the url to safariview show encode non ascii characters that s but not encode itself instead leave it alone and presumably also leave alone all the other characters that encodeuri leaves alone like a and z and and it s not clear if we should encode the remaining characters that encodeuri affects like and and a bunch of other punctuation a key step in fixing this is going to be just end to end testing make a url filled with a ton of these characters post it in a zulip message try following that link and see what url actually comes through
1
354,048
10,562,312,275
IssuesEvent
2019-10-04 18:00:52
easybuilders/easybuild-framework
https://api.github.com/repos/easybuilders/easybuild-framework
closed
EB 4.0.0 breaks when there is another "eb" command before itself in the PATH despite being called with a full path
bug report priority:high
Setup: ``` grep robot-paths /hpc2n/eb/easybuild.cfg robot-paths=/hpc2n/eb/custom/easyconfigs:%(DEFAULT_ROBOT_PATHS)s ``` /hpc2n/eb -> /cvmfs/ebsw.hpc2n.umu.se/amd64_ubuntu1604_skx/ (Last part arch dependent) EB 3.9.4: ``` eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs, /hpc2n/eb/software/Core/EasyBuild/3.9.4/lib/python2.7/site-packages/easybuild_easyconfigs-3.9.4-py2.7.egg/easybuild/easyconfigs ``` EB 4.0.0: ``` eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs, /cvmfs/ebsw.hpc2n.umu.se/amd64_ubuntu1604_common/software/Core/EasyBuild/4.0.0/easybuild/easyconfigs ``` (Note that it also resolves the DEFAULT_ROBOT_PATHS which it shouldn't do) ``` cat /scratch/q/eb #!/bin/bash echo IIeeekkk, Im getting called ``` EB 3.9.4: ``` env PATH=/scratch/q:$PATH $EBROOTEASYBUILD/bin/eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs, /hpc2n/eb/software/Core/EasyBuild/3.9.4/lib/python2.7/site-packages/easybuild_easyconfigs-3.9.4-py2.7.egg/easybuild/easyconfigs ``` Still correct, but EB 4.0.0: ``` env PATH=/scratch/q:$PATH $EBROOTEASYBUILD/bin/eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs ``` This breaks our setup completely and I'm now stuck.
1.0
EB 4.0.0 breaks when there is another "eb" command before itself in the PATH despite being called with a full path - Setup: ``` grep robot-paths /hpc2n/eb/easybuild.cfg robot-paths=/hpc2n/eb/custom/easyconfigs:%(DEFAULT_ROBOT_PATHS)s ``` /hpc2n/eb -> /cvmfs/ebsw.hpc2n.umu.se/amd64_ubuntu1604_skx/ (Last part arch dependent) EB 3.9.4: ``` eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs, /hpc2n/eb/software/Core/EasyBuild/3.9.4/lib/python2.7/site-packages/easybuild_easyconfigs-3.9.4-py2.7.egg/easybuild/easyconfigs ``` EB 4.0.0: ``` eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs, /cvmfs/ebsw.hpc2n.umu.se/amd64_ubuntu1604_common/software/Core/EasyBuild/4.0.0/easybuild/easyconfigs ``` (Note that it also resolves the DEFAULT_ROBOT_PATHS which it shouldn't do) ``` cat /scratch/q/eb #!/bin/bash echo IIeeekkk, Im getting called ``` EB 3.9.4: ``` env PATH=/scratch/q:$PATH $EBROOTEASYBUILD/bin/eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs, /hpc2n/eb/software/Core/EasyBuild/3.9.4/lib/python2.7/site-packages/easybuild_easyconfigs-3.9.4-py2.7.egg/easybuild/easyconfigs ``` Still correct, but EB 4.0.0: ``` env PATH=/scratch/q:$PATH $EBROOTEASYBUILD/bin/eb --show-config robot-paths (F) = /hpc2n/eb/custom/easyconfigs ``` This breaks our setup completely and I'm now stuck.
priority
eb breaks when there is another eb command before itself in the path despite being called with a full path setup grep robot paths eb easybuild cfg robot paths eb custom easyconfigs default robot paths s eb cvmfs ebsw umu se skx last part arch dependent eb eb show config robot paths f eb custom easyconfigs eb software core easybuild lib site packages easybuild easyconfigs egg easybuild easyconfigs eb eb show config robot paths f eb custom easyconfigs cvmfs ebsw umu se common software core easybuild easybuild easyconfigs note that it also resolves the default robot paths which it shouldn t do cat scratch q eb bin bash echo iieeekkk im getting called eb env path scratch q path ebrooteasybuild bin eb show config robot paths f eb custom easyconfigs eb software core easybuild lib site packages easybuild easyconfigs egg easybuild easyconfigs still correct but eb env path scratch q path ebrooteasybuild bin eb show config robot paths f eb custom easyconfigs this breaks our setup completely and i m now stuck
1
503,481
14,592,697,194
IssuesEvent
2020-12-19 18:52:16
ClockGU/clock-frontend
https://api.github.com/repos/ClockGU/clock-frontend
closed
geplante Schichten wiederholen
:atom: Enhancement high priority
Wiederkehrende Schichten planen (wöchentlich, zweiwöchentlich, monatlich, ...).
1.0
geplante Schichten wiederholen - Wiederkehrende Schichten planen (wöchentlich, zweiwöchentlich, monatlich, ...).
priority
geplante schichten wiederholen wiederkehrende schichten planen wöchentlich zweiwöchentlich monatlich
1
581,193
17,287,883,619
IssuesEvent
2021-07-24 04:33:32
RoboJackets/apiary-mobile
https://api.github.com/repos/RoboJackets/apiary-mobile
opened
Add crash monitoring tool
area / devOps priority / high status / grooming type / feature
Crashlytics and Sentry Android are both seemingly good options, worth doing more research to decide
1.0
Add crash monitoring tool - Crashlytics and Sentry Android are both seemingly good options, worth doing more research to decide
priority
add crash monitoring tool crashlytics and sentry android are both seemingly good options worth doing more research to decide
1
40,652
2,868,934,149
IssuesEvent
2015-06-05 22:03:03
dart-lang/pub
https://api.github.com/repos/dart-lang/pub
closed
7zip: Cannot use absolute pathnames for this command
bug Fixed Priority-High
<a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)** _Originally opened as dart-lang/sdk#6840_ ---- From the mailing list (https://groups.google.com/a/dartlang.org/forum/?fromgroups#!topic/misc/PKznAUvoCKw): Hello I've cleaned my system and all dart related folders and caches and downloaded and extracted the latest version of Dart (Version 0.2.0.r13965, build 13965 SDK version 13983) to &quot;C:\dev\dart&quot; on a clean Windows 2003. Opened the &quot;todomvc&quot; example But Pub Install always complains with error: Running pub install ... Pub install fail, Resolving dependencies... Downloading web_components 0.2.0 from hosted... Downloading html5lib 0.0.13 from hosted... Downloading js 0.0.3 from hosted... Could not un-tar (exit code 7). Error: 7-Zip (A) 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18 Error: Cannot use absolute pathnames for this command A few others have chimed in with this problem too.
1.0
7zip: Cannot use absolute pathnames for this command - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)** _Originally opened as dart-lang/sdk#6840_ ---- From the mailing list (https://groups.google.com/a/dartlang.org/forum/?fromgroups#!topic/misc/PKznAUvoCKw): Hello I've cleaned my system and all dart related folders and caches and downloaded and extracted the latest version of Dart (Version 0.2.0.r13965, build 13965 SDK version 13983) to &quot;C:\dev\dart&quot; on a clean Windows 2003. Opened the &quot;todomvc&quot; example But Pub Install always complains with error: Running pub install ... Pub install fail, Resolving dependencies... Downloading web_components 0.2.0 from hosted... Downloading html5lib 0.0.13 from hosted... Downloading js 0.0.3 from hosted... Could not un-tar (exit code 7). Error: 7-Zip (A) 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18 Error: Cannot use absolute pathnames for this command A few others have chimed in with this problem too.
priority
cannot use absolute pathnames for this command issue by originally opened as dart lang sdk from the mailing list hello i ve cleaned my system and all dart related folders and caches and downloaded and extracted the latest version of dart version build sdk version to quot c dev dart quot on a clean windows opened the quot todomvc quot example but pub install always complains with error running pub install pub install fail resolving dependencies downloading web components from hosted downloading from hosted downloading js from hosted could not un tar exit code error zip a copyright c igor pavlov error cannot use absolute pathnames for this command a few others have chimed in with this problem too
1
383,968
11,372,174,771
IssuesEvent
2020-01-28 00:53:15
ShadowItaly/clas-digital
https://api.github.com/repos/ShadowItaly/clas-digital
closed
Ensure Google can index Book content, not "Could not load metadata sorry for that :( Could not load hit list sorry for that."
high priority
Search for "clas digital": * https://www.google.com/search?q=clas+digital One of the first results returned is: 1. https://www.clas-digital.uni-frankfurt.de/GetBooks.html?query=grasm%C3%BCcke&scanId=QH9C92KT&fuzzyness=0 Which shows up in the (Google) search result at: ``` [icon] clas-digital.www.uni-frankfurt.de › GetBooks clas digital https://www.clas-digital.uni-frankfurt.de/ Could not load metadata sorry for that :( Could not load hit list sorry for that. ``` The actual title of the book is: "Schubert (1830). Lehrbuch der Naturgeschichte für Schulen und zum Selbstunterrich" Problems here: 1. Title indexed by Google should be "Schubert (1830). Lehrbuch der …" not "clas digital" 2. Content indexed by Google should be "Singvögel: Arten von Grasmücken, (Sylvia) die ſich alle durch einen dünnen …" not "Could not load metadata sorry for that :( Could not load hit list sorry for that." 3. URL should be `/books/QH9C92KT?highlight=grasm%C3%BCcke` not `/GetBooks.html?query=grasm%C3%BCcke&scanId=QH9C92KT&fuzzyness=0`
1.0
Ensure Google can index Book content, not "Could not load metadata sorry for that :( Could not load hit list sorry for that." - Search for "clas digital": * https://www.google.com/search?q=clas+digital One of the first results returned is: 1. https://www.clas-digital.uni-frankfurt.de/GetBooks.html?query=grasm%C3%BCcke&scanId=QH9C92KT&fuzzyness=0 Which shows up in the (Google) search result at: ``` [icon] clas-digital.www.uni-frankfurt.de › GetBooks clas digital https://www.clas-digital.uni-frankfurt.de/ Could not load metadata sorry for that :( Could not load hit list sorry for that. ``` The actual title of the book is: "Schubert (1830). Lehrbuch der Naturgeschichte für Schulen und zum Selbstunterrich" Problems here: 1. Title indexed by Google should be "Schubert (1830). Lehrbuch der …" not "clas digital" 2. Content indexed by Google should be "Singvögel: Arten von Grasmücken, (Sylvia) die ſich alle durch einen dünnen …" not "Could not load metadata sorry for that :( Could not load hit list sorry for that." 3. URL should be `/books/QH9C92KT?highlight=grasm%C3%BCcke` not `/GetBooks.html?query=grasm%C3%BCcke&scanId=QH9C92KT&fuzzyness=0`
priority
ensure google can index book content not could not load metadata sorry for that could not load hit list sorry for that search for clas digital one of the first results returned is which shows up in the google search result at clas digital › getbooks clas digital could not load metadata sorry for that could not load hit list sorry for that the actual title of the book is schubert lehrbuch der naturgeschichte für schulen und zum selbstunterrich problems here title indexed by google should be schubert lehrbuch der … not clas digital content indexed by google should be singvögel arten von grasmücken sylvia die ſich alle durch einen dünnen … not could not load metadata sorry for that could not load hit list sorry for that url should be books highlight grasm bccke not getbooks html query grasm bccke scanid fuzzyness
1
379,919
11,244,251,041
IssuesEvent
2020-01-10 06:29:50
unitystation/unitystation
https://api.github.com/repos/unitystation/unitystation
closed
Shooting down a door doesn't make it passable for clients
Bug High Priority
## Description As per title. ![i](https://user-images.githubusercontent.com/10403536/71998420-a548b700-3250-11ea-85c6-d9c494a66ecd.gif) Serverplayer can go through the door alright. ### Steps to Reproduce Please enter the steps to reproduce the bug or behaviour: 1. Host in editor, join from build as a client 2. As a client, spawn a gun and shoot a door until it's destroyed 3. Try walking though it
1.0
Shooting down a door doesn't make it passable for clients - ## Description As per title. ![i](https://user-images.githubusercontent.com/10403536/71998420-a548b700-3250-11ea-85c6-d9c494a66ecd.gif) Serverplayer can go through the door alright. ### Steps to Reproduce Please enter the steps to reproduce the bug or behaviour: 1. Host in editor, join from build as a client 2. As a client, spawn a gun and shoot a door until it's destroyed 3. Try walking though it
priority
shooting down a door doesn t make it passable for clients description as per title serverplayer can go through the door alright steps to reproduce please enter the steps to reproduce the bug or behaviour host in editor join from build as a client as a client spawn a gun and shoot a door until it s destroyed try walking though it
1
382,618
11,309,215,084
IssuesEvent
2020-01-19 11:39:38
godotengine/godot
https://api.github.com/repos/godotengine/godot
closed
3.2 RC1 hangs on Windows, worked in beta 6 [GLES3] (regression from 796d35d)
bug high priority platform:windows regression topic:rendering
**Godot version:** 3.2 RC1 official (3.2 beta 6 works, is without the problem) **OS/device including version:** Windows 8.1 (64bit) AMD A8-6410 APU with AMD Radeon R5 Graphics Driver package version - 13.302.1501-140320a-169666C-Toshiba 2D Driver Version - 8.1.1.1634 OpenGL® Version - 25.20.15000.13547 OpenCL™ Version - 10.0.2766.5 AMD Mantle Version - 9.1.10.0295 AMD Mantle API Version - 102400 (Wasn't sure which are relevant, so copying more) **Issue description:** Whenever i try to open a project, i get a popup about AMD display driver that crashed and was recovered while in the meantime the editor during splash screen stops responding (when i try to close it i get the info visible below). The prompt shows basically nothing. ![obraz](https://user-images.githubusercontent.com/51792746/72622877-65ff2200-3944-11ea-82c3-76db629bb190.png) ![Crash](https://user-images.githubusercontent.com/51792746/72621293-43b7d500-3941-11ea-9798-85e2cac8dbb1.PNG) **Steps to reproduce:** Try to open any project while using AMD?
1.0
3.2 RC1 hangs on Windows, worked in beta 6 [GLES3] (regression from 796d35d) - **Godot version:** 3.2 RC1 official (3.2 beta 6 works, is without the problem) **OS/device including version:** Windows 8.1 (64bit) AMD A8-6410 APU with AMD Radeon R5 Graphics Driver package version - 13.302.1501-140320a-169666C-Toshiba 2D Driver Version - 8.1.1.1634 OpenGL® Version - 25.20.15000.13547 OpenCL™ Version - 10.0.2766.5 AMD Mantle Version - 9.1.10.0295 AMD Mantle API Version - 102400 (Wasn't sure which are relevant, so copying more) **Issue description:** Whenever i try to open a project, i get a popup about AMD display driver that crashed and was recovered while in the meantime the editor during splash screen stops responding (when i try to close it i get the info visible below). The prompt shows basically nothing. ![obraz](https://user-images.githubusercontent.com/51792746/72622877-65ff2200-3944-11ea-82c3-76db629bb190.png) ![Crash](https://user-images.githubusercontent.com/51792746/72621293-43b7d500-3941-11ea-9798-85e2cac8dbb1.PNG) **Steps to reproduce:** Try to open any project while using AMD?
priority
hangs on windows worked in beta regression from godot version official beta works is without the problem os device including version windows amd apu with amd radeon graphics driver package version toshiba driver version opengl® version opencl™ version amd mantle version amd mantle api version wasn t sure which are relevant so copying more issue description whenever i try to open a project i get a popup about amd display driver that crashed and was recovered while in the meantime the editor during splash screen stops responding when i try to close it i get the info visible below the prompt shows basically nothing steps to reproduce try to open any project while using amd
1
828,839
31,844,827,167
IssuesEvent
2023-09-14 19:03:57
NOAA-OWP/ras2fim
https://api.github.com/repos/NOAA-OWP/ras2fim
closed
[13pt] Update all ras2fim processing files to be linted except tools/nws*
enhancement ras2fim high priority
For each file in the ras2fim folder, not counting the tools/nws * files. Run black, isort and flake8. Also add those to the environment files and send out instructions on how to do it by hand for now (from command line). We can automate it later. Later we can do the nws files. Holding them back as they are a lower priority. See [155](https://github.com/NOAA-OWP/ras2fim/issues/155).
1.0
[13pt] Update all ras2fim processing files to be linted except tools/nws* - For each file in the ras2fim folder, not counting the tools/nws * files. Run black, isort and flake8. Also add those to the environment files and send out instructions on how to do it by hand for now (from command line). We can automate it later. Later we can do the nws files. Holding them back as they are a lower priority. See [155](https://github.com/NOAA-OWP/ras2fim/issues/155).
priority
update all processing files to be linted except tools nws for each file in the folder not counting the tools nws files run black isort and also add those to the environment files and send out instructions on how to do it by hand for now from command line we can automate it later later we can do the nws files holding them back as they are a lower priority see
1
122,011
4,826,554,443
IssuesEvent
2016-11-07 10:33:06
enviPath/enviPath
https://api.github.com/repos/enviPath/enviPath
closed
Malformed UTF-8 character in JSON string on compounds
high priority
Malformed UTF-8 character in JSON string at: - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/55a40171-5d5d-44e6-8bd1-0322fabb2c76 - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/df936535-9854-4cfb-9c60-d6bd3637f4bb - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/19a60c37-c47a-4f33-a37f-0c61cb6a5671 - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/5a69680c-718e-400d-ba90-cce08e7e6f3f - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/478c1208-6c05-417c-be3a-86dd72624379
1.0
Malformed UTF-8 character in JSON string on compounds - Malformed UTF-8 character in JSON string at: - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/55a40171-5d5d-44e6-8bd1-0322fabb2c76 - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/df936535-9854-4cfb-9c60-d6bd3637f4bb - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/19a60c37-c47a-4f33-a37f-0c61cb6a5671 - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/5a69680c-718e-400d-ba90-cce08e7e6f3f - https://envipath.org/package/3929925e-b71c-4c92-bc59-281a0b595cd2/compound/478c1208-6c05-417c-be3a-86dd72624379
priority
malformed utf character in json string on compounds malformed utf character in json string at
1
795,026
28,058,906,917
IssuesEvent
2023-03-29 11:16:43
AY2223S2-CS2113-T12-4/tp
https://api.github.com/repos/AY2223S2-CS2113-T12-4/tp
closed
[Task] Developer Guide
type.Task priority.High
**Describe the Task** Fill in the developer guide for respective features, including UML diagrams generated. **Additional context** Go to ```docs/DeveloperGuide.md``` to edit developer guide All the ```.puml``` files should be stored in ```docs/digrams``` folder. A ```style.puml``` which speficies the colors/styles of UML diagrams can be found in ```docs/disgrams```. It is adopted from https://github.com/se-edu/addressbook-level3/tree/master/docs/diagrams
1.0
[Task] Developer Guide - **Describe the Task** Fill in the developer guide for respective features, including UML diagrams generated. **Additional context** Go to ```docs/DeveloperGuide.md``` to edit developer guide All the ```.puml``` files should be stored in ```docs/digrams``` folder. A ```style.puml``` which speficies the colors/styles of UML diagrams can be found in ```docs/disgrams```. It is adopted from https://github.com/se-edu/addressbook-level3/tree/master/docs/diagrams
priority
developer guide describe the task fill in the developer guide for respective features including uml diagrams generated additional context go to docs developerguide md to edit developer guide all the puml files should be stored in docs digrams folder a style puml which speficies the colors styles of uml diagrams can be found in docs disgrams it is adopted from
1
136,315
5,280,096,241
IssuesEvent
2017-02-07 13:19:16
hpi-swt2/workshop-portal
https://api.github.com/repos/hpi-swt2/workshop-portal
closed
Visible custom fields for orga
High Priority needs acceptance team-hendrik
**As** oragnizer **I want to** be able to see what the applicant wrote in the custom fields when i click on the application details page **In order to** make the right choice AC - [ ] The details are shown in the list of input from the user on the application detail page
1.0
Visible custom fields for orga - **As** oragnizer **I want to** be able to see what the applicant wrote in the custom fields when i click on the application details page **In order to** make the right choice AC - [ ] The details are shown in the list of input from the user on the application detail page
priority
visible custom fields for orga as oragnizer i want to be able to see what the applicant wrote in the custom fields when i click on the application details page in order to make the right choice ac the details are shown in the list of input from the user on the application detail page
1
131,203
5,145,156,552
IssuesEvent
2017-01-12 20:45:23
BinPar/eBooks
https://api.github.com/repos/BinPar/eBooks
closed
Activar DTM para Real Academia Española
DTM Priority: High S2
Ahora mismo están sin acceso. Licencia: http://gestor.medicapanamericana.com/tables/license.aspx?id=770 Activar hasta 15/01/2018 Gracias
1.0
Activar DTM para Real Academia Española - Ahora mismo están sin acceso. Licencia: http://gestor.medicapanamericana.com/tables/license.aspx?id=770 Activar hasta 15/01/2018 Gracias
priority
activar dtm para real academia española ahora mismo están sin acceso licencia activar hasta gracias
1
706,036
24,258,019,046
IssuesEvent
2022-09-27 19:37:18
gammapy/gammapy
https://api.github.com/repos/gammapy/gammapy
closed
Gammapy validation: HESS DL3 DR1
effort-medium package-novice priority-high
As part of the Gammapy validation effort while preparing v1.0, we should script some of the H.E.S.S. data level 3, data release 1 analyses for validation. Note that large parts of Gammapy were rewritten since Gammapy v0.12 (May 2019) which was used for https://arxiv.org/pdf/1910.08088.pdf requiring the validation exercise to be re-done with the latest version of Gammapy, and from now on moving forward we'll maintain the validation scripts in the Gammapy team to avoid regressions. One concrete case of a regression that we already noticed and fixed in the meantime is https://github.com/gammapy/gammapy/pull/2367 . A task description and references are here: - https://github.com/gammapy/gammapy-benchmarks - https://github.com/gammapy/gammapy-benchmarks/blob/master/validation - https://github.com/gammapy/gammapy-benchmarks/tree/master/validation/hess-dl3-dr1 We're looking for help! If you can contribute, please leave a comment here, or contact me on Slack. For this, you don't have to be a Gammapy developer, what's required is some Python & Gammapy & IACT analysis experience, and at least ~ 2 full days to contribute in Nov 2019, although ~ 1 week in Nov & Dec 2019 is more realistic if you're willing to script the analysis for all targets.
1.0
Gammapy validation: HESS DL3 DR1 - As part of the Gammapy validation effort while preparing v1.0, we should script some of the H.E.S.S. data level 3, data release 1 analyses for validation. Note that large parts of Gammapy were rewritten since Gammapy v0.12 (May 2019) which was used for https://arxiv.org/pdf/1910.08088.pdf requiring the validation exercise to be re-done with the latest version of Gammapy, and from now on moving forward we'll maintain the validation scripts in the Gammapy team to avoid regressions. One concrete case of a regression that we already noticed and fixed in the meantime is https://github.com/gammapy/gammapy/pull/2367 . A task description and references are here: - https://github.com/gammapy/gammapy-benchmarks - https://github.com/gammapy/gammapy-benchmarks/blob/master/validation - https://github.com/gammapy/gammapy-benchmarks/tree/master/validation/hess-dl3-dr1 We're looking for help! If you can contribute, please leave a comment here, or contact me on Slack. For this, you don't have to be a Gammapy developer, what's required is some Python & Gammapy & IACT analysis experience, and at least ~ 2 full days to contribute in Nov 2019, although ~ 1 week in Nov & Dec 2019 is more realistic if you're willing to script the analysis for all targets.
priority
gammapy validation hess as part of the gammapy validation effort while preparing we should script some of the h e s s data level data release analyses for validation note that large parts of gammapy were rewritten since gammapy may which was used for requiring the validation exercise to be re done with the latest version of gammapy and from now on moving forward we ll maintain the validation scripts in the gammapy team to avoid regressions one concrete case of a regression that we already noticed and fixed in the meantime is a task description and references are here we re looking for help if you can contribute please leave a comment here or contact me on slack for this you don t have to be a gammapy developer what s required is some python gammapy iact analysis experience and at least full days to contribute in nov although week in nov dec is more realistic if you re willing to script the analysis for all targets
1
411,682
12,027,523,531
IssuesEvent
2020-04-12 18:47:54
ClaudiaLapalme/pikaroute
https://api.github.com/repos/ClaudiaLapalme/pikaroute
closed
US-20 As a user, I want to get written instructions on how to reach my indoor destination so that I can reach it easily
High Priority
The application should provide the user with the option of seeing written instructions on how to reach their indoor destination. Requirements: - [ ] The application should show the step by step instructions of the itinerary for indoor routes - [ ] If the route requires a mix of indoor and outdoor instructions, it provides both in the same window ![Group 42Indoor_Dir](https://user-images.githubusercontent.com/32103530/75602761-e8c3e280-5a95-11ea-8bc8-a152708b6a03.png) ![Group 40Indoor_Dir](https://user-images.githubusercontent.com/32103530/75602762-e8c3e280-5a95-11ea-99aa-e24f927dd5da.png)
1.0
US-20 As a user, I want to get written instructions on how to reach my indoor destination so that I can reach it easily - The application should provide the user with the option of seeing written instructions on how to reach their indoor destination. Requirements: - [ ] The application should show the step by step instructions of the itinerary for indoor routes - [ ] If the route requires a mix of indoor and outdoor instructions, it provides both in the same window ![Group 42Indoor_Dir](https://user-images.githubusercontent.com/32103530/75602761-e8c3e280-5a95-11ea-8bc8-a152708b6a03.png) ![Group 40Indoor_Dir](https://user-images.githubusercontent.com/32103530/75602762-e8c3e280-5a95-11ea-99aa-e24f927dd5da.png)
priority
us as a user i want to get written instructions on how to reach my indoor destination so that i can reach it easily the application should provide the user with the option of seeing written instructions on how to reach their indoor destination requirements the application should show the step by step instructions of the itinerary for indoor routes if the route requires a mix of indoor and outdoor instructions it provides both in the same window
1
761,830
26,698,517,072
IssuesEvent
2023-01-27 12:30:27
oceanprotocol/market
https://api.github.com/repos/oceanprotocol/market
closed
IPFS files can't be added
Type: Bug Priority: High
During publish, whenever I input a CID I get: <img width="909" alt="Screenshot 2023-01-24 at 09 55 53" src="https://user-images.githubusercontent.com/90316/214261739-b40a6d8b-aa49-490a-b26d-632b88462af1.png"> So not possible to publish IPFS files right now through our UI. Using this CID for testing: `QmTXDxopoBXtqxwtnxDPfC77g46AJtL3b2H8u7zAcveA8S`
1.0
IPFS files can't be added - During publish, whenever I input a CID I get: <img width="909" alt="Screenshot 2023-01-24 at 09 55 53" src="https://user-images.githubusercontent.com/90316/214261739-b40a6d8b-aa49-490a-b26d-632b88462af1.png"> So not possible to publish IPFS files right now through our UI. Using this CID for testing: `QmTXDxopoBXtqxwtnxDPfC77g46AJtL3b2H8u7zAcveA8S`
priority
ipfs files can t be added during publish whenever i input a cid i get img width alt screenshot at src so not possible to publish ipfs files right now through our ui using this cid for testing
1
545,572
15,953,155,267
IssuesEvent
2021-04-15 12:05:44
sopra-fs21-group-22/client
https://api.github.com/repos/sopra-fs21-group-22/client
closed
Role Information
high priority task
- [x] At any point of the game I want to be able to click a button and see which role I am and see a short description of what my goal is. - [x] I want to be able to see at all times who the sheriff is, and all the other roles should remain hidden ⏰ Time estimate: 3h 📌 This task is part of the user story #2.
1.0
Role Information - - [x] At any point of the game I want to be able to click a button and see which role I am and see a short description of what my goal is. - [x] I want to be able to see at all times who the sheriff is, and all the other roles should remain hidden ⏰ Time estimate: 3h 📌 This task is part of the user story #2.
priority
role information at any point of the game i want to be able to click a button and see which role i am and see a short description of what my goal is i want to be able to see at all times who the sheriff is and all the other roles should remain hidden ⏰ time estimate 📌 this task is part of the user story
1
28,469
2,702,909,215
IssuesEvent
2015-04-06 13:46:06
CenterForOpenScience/osf.io
https://api.github.com/repos/CenterForOpenScience/osf.io
closed
Failure to Render Unicode Characters
5 - Pending Review Bug: Production Community Priority - High
Steps ------- 1. Upload a txt file with unicode characters (UTF-8) 2. Attempt to view the file via the file renderer 3. Characters appear garbled. Expected ------------ Characters would render as they appear in the text file: ![screen shot 2015-04-02 at 11 24 28 am](https://cloud.githubusercontent.com/assets/7749914/6967172/b05102a2-d92b-11e4-9495-66a1600cad96.png) Actual -------- Characters do not: ![screen shot 2015-04-02 at 11 28 22 am](https://cloud.githubusercontent.com/assets/7749914/6967182/bd217fc0-d92b-11e4-9280-38980f09bac2.png) You can see my example here: https://osf.io/d93ih/ Reported on the help desk. @brianjgeiger indicates that the expected behavior is that the characters will render.
1.0
Failure to Render Unicode Characters - Steps ------- 1. Upload a txt file with unicode characters (UTF-8) 2. Attempt to view the file via the file renderer 3. Characters appear garbled. Expected ------------ Characters would render as they appear in the text file: ![screen shot 2015-04-02 at 11 24 28 am](https://cloud.githubusercontent.com/assets/7749914/6967172/b05102a2-d92b-11e4-9495-66a1600cad96.png) Actual -------- Characters do not: ![screen shot 2015-04-02 at 11 28 22 am](https://cloud.githubusercontent.com/assets/7749914/6967182/bd217fc0-d92b-11e4-9280-38980f09bac2.png) You can see my example here: https://osf.io/d93ih/ Reported on the help desk. @brianjgeiger indicates that the expected behavior is that the characters will render.
priority
failure to render unicode characters steps upload a txt file with unicode characters utf attempt to view the file via the file renderer characters appear garbled expected characters would render as they appear in the text file actual characters do not you can see my example here reported on the help desk brianjgeiger indicates that the expected behavior is that the characters will render
1
546,041
15,982,997,889
IssuesEvent
2021-04-18 07:10:25
r-lib/styler
https://api.github.com/repos/r-lib/styler
closed
Indenting leading spaces is inconsistent?
Complexity: Medium Priority: High Status: Unassigned Type: Bug
Hi, I noticed that when there is a certain number of leading spaces, the line does not indent to the correct number. For example, with `indent_by = 2` (though this issue appears for more than just `2` spaces), ### 4 leading spaces (for example): ```R function() { print("hi") } ``` does not indent to 2 spaces. However, ### 17 leading spaces (for example): ```R function() { print("hi") } ``` does indent to 2. Can anyone reproduce this, or does anyone know why this is? ### System info styler: 1.3.2 R: 4.0.4
1.0
Indenting leading spaces is inconsistent? - Hi, I noticed that when there is a certain number of leading spaces, the line does not indent to the correct number. For example, with `indent_by = 2` (though this issue appears for more than just `2` spaces), ### 4 leading spaces (for example): ```R function() { print("hi") } ``` does not indent to 2 spaces. However, ### 17 leading spaces (for example): ```R function() { print("hi") } ``` does indent to 2. Can anyone reproduce this, or does anyone know why this is? ### System info styler: 1.3.2 R: 4.0.4
priority
indenting leading spaces is inconsistent hi i noticed that when there is a certain number of leading spaces the line does not indent to the correct number for example with indent by though this issue appears for more than just spaces leading spaces for example r function print hi does not indent to spaces however leading spaces for example r function print hi does indent to can anyone reproduce this or does anyone know why this is system info styler r
1
248,950
7,947,295,646
IssuesEvent
2018-07-11 01:53:52
maxmagee/chick-fil-a
https://api.github.com/repos/maxmagee/chick-fil-a
closed
Background Color Doesn't Match Image Background Color on Intro Screens
bug high priority
The color of the main background doesn't match the color of the images on the second and third pages of the intro scroll view. I've been tweaking this for a while, but can't quite get it to work. The color picker on my Mac is yielding unpredictable results. I believe this might be related to this article and my use of Sketch to export these images: [StackExchange](https://graphicdesign.stackexchange.com/questions/53073/color-mismatching-on-sketch-exports). Tasks to address: - [x] Figure out why the colors are different. - [x] Create new images for the intro screen that aren't exported via Sketch to remain consistent. - [x] Recalculate the color values on a new sRGB document and adjust them in the application.
1.0
Background Color Doesn't Match Image Background Color on Intro Screens - The color of the main background doesn't match the color of the images on the second and third pages of the intro scroll view. I've been tweaking this for a while, but can't quite get it to work. The color picker on my Mac is yielding unpredictable results. I believe this might be related to this article and my use of Sketch to export these images: [StackExchange](https://graphicdesign.stackexchange.com/questions/53073/color-mismatching-on-sketch-exports). Tasks to address: - [x] Figure out why the colors are different. - [x] Create new images for the intro screen that aren't exported via Sketch to remain consistent. - [x] Recalculate the color values on a new sRGB document and adjust them in the application.
priority
background color doesn t match image background color on intro screens the color of the main background doesn t match the color of the images on the second and third pages of the intro scroll view i ve been tweaking this for a while but can t quite get it to work the color picker on my mac is yielding unpredictable results i believe this might be related to this article and my use of sketch to export these images tasks to address figure out why the colors are different create new images for the intro screen that aren t exported via sketch to remain consistent recalculate the color values on a new srgb document and adjust them in the application
1
548,778
16,075,436,531
IssuesEvent
2021-04-25 08:59:21
sopra-fs21-group-09/sopra-fs21-group-09-client
https://api.github.com/repos/sopra-fs21-group-09/sopra-fs21-group-09-client
closed
Create Group Component
Frontend component high priority task
Part of User Story #7 and #8 Quadratic boxes with group name in it. Estimate: 1 h
1.0
Create Group Component - Part of User Story #7 and #8 Quadratic boxes with group name in it. Estimate: 1 h
priority
create group component part of user story and quadratic boxes with group name in it estimate h
1
648,313
21,182,557,333
IssuesEvent
2022-04-08 09:23:21
CVUA-RRW/FooDMe
https://api.github.com/repos/CVUA-RRW/FooDMe
closed
Outdated BLAST+ version
high priority
BLAST now in version 2.13. Upgade required for future support of new identifiers.
1.0
Outdated BLAST+ version - BLAST now in version 2.13. Upgade required for future support of new identifiers.
priority
outdated blast version blast now in version upgade required for future support of new identifiers
1
260,205
8,204,967,811
IssuesEvent
2018-09-03 08:38:04
hpcugent/vsc_user_docs
https://api.github.com/repos/hpcugent/vsc_user_docs
closed
troubleshooting: module conflicts
Jasper (HPC-UGent student intern) priority:high
* modules that are loaded together must use same toolchain version * also same version of common dependencies, e.g. `Python` Example error message: ``` $ module load Python/2.7.14-intel-2018a $ module load HMMER/3.1b2-intel-2017a Lmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). You should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. Use 'ml spider HMMER' to get an overview of the available versions. If you don't understand the warning or error, contact the helpdesk at hpc@ugent.be While processing the following module(s): Module fullname Module Filename --------------- --------------- HMMER/3.1b2-intel-2017a /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua ``` The error message will be specific to sites using Lmod (certainly Ghent + Brussels).
1.0
troubleshooting: module conflicts - * modules that are loaded together must use same toolchain version * also same version of common dependencies, e.g. `Python` Example error message: ``` $ module load Python/2.7.14-intel-2018a $ module load HMMER/3.1b2-intel-2017a Lmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). You should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. Use 'ml spider HMMER' to get an overview of the available versions. If you don't understand the warning or error, contact the helpdesk at hpc@ugent.be While processing the following module(s): Module fullname Module Filename --------------- --------------- HMMER/3.1b2-intel-2017a /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua ``` The error message will be specific to sites using Lmod (certainly Ghent + Brussels).
priority
troubleshooting module conflicts modules that are loaded together must use same toolchain version also same version of common dependencies e g python example error message module load python intel module load hmmer intel lmod has detected the following error a different version of the intel module is already loaded see output of ml you should load another hmmer module for that is compatible with the currently loaded version of intel use ml spider hmmer to get an overview of the available versions if you don t understand the warning or error contact the helpdesk at hpc ugent be while processing the following module s module fullname module filename hmmer intel apps gent haswell ib modules all hmmer intel lua the error message will be specific to sites using lmod certainly ghent brussels
1
293,519
8,996,959,807
IssuesEvent
2019-02-02 06:48:29
sfdx-isv/sfdx-falcon
https://api.github.com/repos/sfdx-isv/sfdx-falcon
closed
CLI responses with invalid/unparseable JSON are misinterpreted by the logic in EXECUTOR:sfdx:executeSfdxCommand()
bug high-priority
It's possible for some CLI commands to exit from `shell.exec()` with an exit code of `0` but with a result that is actually an error/failure. It's also possible for commands to exit from `shell.exec()` with a non-zero exit code, but with a result that is not parseable. This problem seems especially likely with the `force:org:create` command. For example, one morning in January, there was a system-wide problem that caused most scratch org create requests to fail. The response from the CLI was an exit code of `0` coupled with this response to stdout: ``` '\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1G{"message":"The request to create a scratch org failed with error code: C-9999.","status":1,"stack":"RemoteOrgSignupFailed: The request to create a scratch org failed with error code: C-9999.\\n at force.retrieve.then (/Users/vchawla/.local/share/sfdx/client/node_modules/salesforce-alm/dist/lib/scratchOrgInfoApi.js:333:25)\\n at tryCatcher (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/util.js:16:23)\\n at Promise._settlePromiseFromHandler (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:510:31)\\n at Promise._settlePromise (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:567:18)\\n at Promise._settlePromise0 (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:612:10)\\n at Promise._settlePromises (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:691:18)\\n at Async._drainQueue (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/async.js:138:16)\\n at Async._drainQueues (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/async.js:148:10)\\n at Immediate.Async.drainQueues (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/async.js:17:14)\\n at runCallback (timers.js:789:20)\\n at tryOnImmediate (timers.js:751:5)\\n at processImmediate [as _immediateCallback] (timers.js:722:5)","name":"RemoteOrgSignupFailed","warnings":[]}\n' ``` A similar problem could be reproduced by adding an invalid org feature (eg. `Communities123` to a scratch org configuration file for example: ``` "features": [ "Communities123", "ContactsToMultipleAccounts", "PersonAccounts" ], ``` This results in a response like this to stdout: ``` '\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1G{"message":"Communitiesxxx is not a valid Features value.","status":1,"stack":"INVALID_INPUT: Communitiesxxx is not a valid Features value.\\n at HttpApi.getError (/Users/vchawla/.local/share/sfdx/client/node_modules/salesforce-alm/node_modules/jsforce/lib/http-api.js:250:13)\\n at /Users/vchawla/.local/share/sfdx/client/node_modules/salesforce-alm/node_modules/jsforce/lib/http-api.js:95:22\\n at tryCallOne (/Users/vchawla/.local/share/sfdx/client/node_modules/promise/lib/core.js:37:12)\\n at /Users/vchawla/.local/share/sfdx/client/node_modules/promise/lib/core.js:123:15\\n at flush (/Users/vchawla/.local/share/sfdx/client/node_modules/asap/raw.js:50:29)\\n at _combinedTickCallback (internal/process/next_tick.js:131:7)\\n at process._tickCallback (internal/process/next_tick.js:180:9)","name":"INVALID_INPUT","warnings":[]}\n' ``` This response is not parseable, though there is a parseable JSON response buried at the end underneath all of the repeating `Processing...` messages. We need to be able to detect improperly delivered failure messages from the CLI. We hope that errors come with a non-zero shell exit code, but even when they do there still may be unparseable data in the response. The fix for this should try to find and parse out valid JSON from the last part of stdout, ignoring all of the `Processing...` messages.
1.0
CLI responses with invalid/unparseable JSON are misinterpreted by the logic in EXECUTOR:sfdx:executeSfdxCommand() - It's possible for some CLI commands to exit from `shell.exec()` with an exit code of `0` but with a result that is actually an error/failure. It's also possible for commands to exit from `shell.exec()` with a non-zero exit code, but with a result that is not parseable. This problem seems especially likely with the `force:org:create` command. For example, one morning in January, there was a system-wide problem that caused most scratch org create requests to fail. The response from the CLI was an exit code of `0` coupled with this response to stdout: ``` '\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1G{"message":"The request to create a scratch org failed with error code: C-9999.","status":1,"stack":"RemoteOrgSignupFailed: The request to create a scratch org failed with error code: C-9999.\\n at force.retrieve.then (/Users/vchawla/.local/share/sfdx/client/node_modules/salesforce-alm/dist/lib/scratchOrgInfoApi.js:333:25)\\n at tryCatcher (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/util.js:16:23)\\n at Promise._settlePromiseFromHandler (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:510:31)\\n at Promise._settlePromise (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:567:18)\\n at Promise._settlePromise0 (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:612:10)\\n at Promise._settlePromises (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/promise.js:691:18)\\n at Async._drainQueue (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/async.js:138:16)\\n at Async._drainQueues (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/async.js:148:10)\\n at Immediate.Async.drainQueues (/Users/vchawla/.local/share/sfdx/client/node_modules/bluebird/js/release/async.js:17:14)\\n at runCallback (timers.js:789:20)\\n at tryOnImmediate (timers.js:751:5)\\n at processImmediate [as _immediateCallback] (timers.js:722:5)","name":"RemoteOrgSignupFailed","warnings":[]}\n' ``` A similar problem could be reproduced by adding an invalid org feature (eg. `Communities123` to a scratch org configuration file for example: ``` "features": [ "Communities123", "ContactsToMultipleAccounts", "PersonAccounts" ], ``` This results in a response like this to stdout: ``` '\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1GProcessing... |\u001b[2K\u001b[1GProcessing... /\u001b[2K\u001b[1GProcessing... -\u001b[2K\u001b[1GProcessing... \\\u001b[2K\u001b[1G{"message":"Communitiesxxx is not a valid Features value.","status":1,"stack":"INVALID_INPUT: Communitiesxxx is not a valid Features value.\\n at HttpApi.getError (/Users/vchawla/.local/share/sfdx/client/node_modules/salesforce-alm/node_modules/jsforce/lib/http-api.js:250:13)\\n at /Users/vchawla/.local/share/sfdx/client/node_modules/salesforce-alm/node_modules/jsforce/lib/http-api.js:95:22\\n at tryCallOne (/Users/vchawla/.local/share/sfdx/client/node_modules/promise/lib/core.js:37:12)\\n at /Users/vchawla/.local/share/sfdx/client/node_modules/promise/lib/core.js:123:15\\n at flush (/Users/vchawla/.local/share/sfdx/client/node_modules/asap/raw.js:50:29)\\n at _combinedTickCallback (internal/process/next_tick.js:131:7)\\n at process._tickCallback (internal/process/next_tick.js:180:9)","name":"INVALID_INPUT","warnings":[]}\n' ``` This response is not parseable, though there is a parseable JSON response buried at the end underneath all of the repeating `Processing...` messages. We need to be able to detect improperly delivered failure messages from the CLI. We hope that errors come with a non-zero shell exit code, but even when they do there still may be unparseable data in the response. The fix for this should try to find and parse out valid JSON from the last part of stdout, ignoring all of the `Processing...` messages.
priority
cli responses with invalid unparseable json are misinterpreted by the logic in executor sfdx executesfdxcommand it s possible for some cli commands to exit from shell exec with an exit code of but with a result that is actually an error failure it s also possible for commands to exit from shell exec with a non zero exit code but with a result that is not parseable this problem seems especially likely with the force org create command for example one morning in january there was a system wide problem that caused most scratch org create requests to fail the response from the cli was an exit code of coupled with this response to stdout timers js name remoteorgsignupfailed warnings n a similar problem could be reproduced by adding an invalid org feature eg to a scratch org configuration file for example features contactstomultipleaccounts personaccounts this results in a response like this to stdout n this response is not parseable though there is a parseable json response buried at the end underneath all of the repeating processing messages we need to be able to detect improperly delivered failure messages from the cli we hope that errors come with a non zero shell exit code but even when they do there still may be unparseable data in the response the fix for this should try to find and parse out valid json from the last part of stdout ignoring all of the processing messages
1
3,671
2,538,977,656
IssuesEvent
2015-01-27 11:53:15
newca12/gapt
https://api.github.com/repos/newca12/gapt
closed
non-deterministic bug, apparently in cleanStructuralRules
1 star bug imported Priority-High
_From [stefan.hetzl](https://code.google.com/u/stefan.hetzl/) on April 04, 2014 17:42:17_ I have been experiencing the following strange behaviour. During 'mvn install', a test failed as below. I tried running this test again several times using 'mvn -pl integration_tests/misc_test install' and sometimes this problem did occur, and sometimes it did not (i.e. all tests passed successfully). In fact, after around 15 tries, the test seems to fail with a probability of only 20-25&#37;. This looks to me like a bug in cleanStructuralRules which is sometimes exposed (and sometimes not) depending on the proof produced by reading back an expansion tree. The bug should be fixed, even if it appears only sometimes. Independently of that it would be nice to understand the cause of this non-deterministic behaviour. Bernhard, any thoughts on that? \------------------------------------------------------- T E S T S \------------------------------------------------------- Running at.logic.integration_tests.MiscTest Using OneVariableDelta. End sequent: P(0), ∀x.(P(x)⊃P(s(x))) :- P(s(s(s(s(0))))) Size of term set: 4 initializing generalized delta-table (set-based) Number of grammars: 2 Smallest grammar-size: 4 Number of smallest grammars: 2 log4j:WARN No appenders could be found for logger (at.logic.algorithms.cutIntroduction.CutIntroduction$). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. Grammar chosen: {List(tuple1(s(α_0)), tuple1(α_0))} o {Set(List(0), List(s(s(0))))} Minimized cut formula: ∀x.(P(s(s(x)))∨¬P(x)) Warning: Axiom not uniquely specified, possible candidates: A21 ∀x:ι.=(bm(x:ι, 0:ι):ι, x:ι): o, A20 ∀x:ι.=(bm(0:ι, x:ι):ι, 0:ι): o Warning: Axiom not uniquely specified, possible candidates: A35 ∀x:ι.∀y:ι.(<(+(x:ι, 1:ι):ι, +(y:ι, 1:ι):ι): o⊃<(x:ι, y:ι): o), A17 ∀x:ι.∀z:ι.∀y:ι.(<(+(x:ι, z:ι):ι, +(y:ι, z:ι):ι): o⊃<(x:ι, y:ι): o) User specified sub works!{ Y -> (\lambda x ( - ((x + 1) \< x))) } Warning: Axiom not uniquely specified, possible candidates: IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o), IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o) Warning: Axiom not uniquely specified, possible candidates: A32 ∀x:ι.∀y:ι.(<(x:ι, y:ι): o⊃<(+(x:ι, 1:ι):ι, +(y:ι, 1:ι):ι): o), A36 ∀x:ι.∀z:ι.∀y:ι.(<(x:ι, y:ι): o⊃<(+(x:ι, z:ι):ι, +(y:ι, z:ι):ι): o) User specified sub works!{ Y -> (\lambda x (x \< (x + 1))) } Warning: Axiom not uniquely specified, possible candidates: IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o), IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o) wrong sub found! ((exists x_{16} (MON(h, 0) & NOCC(h, 0, \sigma))) & (all n ((exists x_{17} (MON(h, n) & NOCC(h, n, \sigma))) -> (exists x_{18} (MON(h, (n + 1)) & NOCC(h, (n + 1), \sigma)))))) -> (all n (exists x_{19} (MON(h, n) & NOCC(h, n, \sigma)))) for ((exists h (MON(h, 0) & NOCC(h, 0, \sigma))) & (all n ((exists h (MON(h, n) & NOCC(h, n, \sigma))) -> (exists h (MON(h, (n + 1)) & NOCC(h, (n + 1), \sigma)))))) -> (all n (exists h (MON(h, n) & NOCC(h, n, \sigma)))) sub={ h -> h } ax= ((exists h (MON(h, 0) & NOCC(h, 0, \sigma))) & (all n ((exists h (MON(h, n) & NOCC(h, n, \sigma))) -> (exists h (MON(h, (n + 1)) & NOCC(h, (n + 1), \sigma)))))) -> (all n (exists h (MON(h, n) & NOCC(h, n, \sigma)))) User specified sub works!{ Y -> (\lambda x (exists h (MON(h, x) & NOCC(h, x, \sigma)))) } Tests run: 12, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 8.468 sec \<<< FAILURE! - in at.logic.integration_tests.MiscTest The system should::Construct proof with expansion sequent extracted from proof 2/2(at.logic.integration_tests.MiscTest) Time elapsed: 0.11 sec \<<< ERROR! at.logic.calculi.lk.base.LKUnaryRuleCreationException: Could not create lk rule c:l from parent P(0), (P(0)⊃P(s(0))) :- P(s(0)) with auxiliary formulas P(0) at at.logic.calculi.lk.propositionalRules.ContractionLeftRule$.apply(propositionalRules.scala:266) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:49) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:53) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:53) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:234) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:53) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.apply(elimination.scala:23) at at.logic.algorithms.lk.solve$.startProving(solve.scala:66) at at.logic.algorithms.lk.solve$.expansionProofToLKProof(solve.scala:55) at at.logic.integration_tests.MiscTest$$anonfun$1$$anonfun$apply$42.apply(MiscTest.scala:259) at at.logic.integration_tests.MiscTest$$anonfun$1$$anonfun$apply$42.apply(MiscTest.scala:255) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire... _Original issue: http://code.google.com/p/gapt/issues/detail?id=264_
1.0
non-deterministic bug, apparently in cleanStructuralRules - _From [stefan.hetzl](https://code.google.com/u/stefan.hetzl/) on April 04, 2014 17:42:17_ I have been experiencing the following strange behaviour. During 'mvn install', a test failed as below. I tried running this test again several times using 'mvn -pl integration_tests/misc_test install' and sometimes this problem did occur, and sometimes it did not (i.e. all tests passed successfully). In fact, after around 15 tries, the test seems to fail with a probability of only 20-25&#37;. This looks to me like a bug in cleanStructuralRules which is sometimes exposed (and sometimes not) depending on the proof produced by reading back an expansion tree. The bug should be fixed, even if it appears only sometimes. Independently of that it would be nice to understand the cause of this non-deterministic behaviour. Bernhard, any thoughts on that? \------------------------------------------------------- T E S T S \------------------------------------------------------- Running at.logic.integration_tests.MiscTest Using OneVariableDelta. End sequent: P(0), ∀x.(P(x)⊃P(s(x))) :- P(s(s(s(s(0))))) Size of term set: 4 initializing generalized delta-table (set-based) Number of grammars: 2 Smallest grammar-size: 4 Number of smallest grammars: 2 log4j:WARN No appenders could be found for logger (at.logic.algorithms.cutIntroduction.CutIntroduction$). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. Grammar chosen: {List(tuple1(s(α_0)), tuple1(α_0))} o {Set(List(0), List(s(s(0))))} Minimized cut formula: ∀x.(P(s(s(x)))∨¬P(x)) Warning: Axiom not uniquely specified, possible candidates: A21 ∀x:ι.=(bm(x:ι, 0:ι):ι, x:ι): o, A20 ∀x:ι.=(bm(0:ι, x:ι):ι, 0:ι): o Warning: Axiom not uniquely specified, possible candidates: A35 ∀x:ι.∀y:ι.(<(+(x:ι, 1:ι):ι, +(y:ι, 1:ι):ι): o⊃<(x:ι, y:ι): o), A17 ∀x:ι.∀z:ι.∀y:ι.(<(+(x:ι, z:ι):ι, +(y:ι, z:ι):ι): o⊃<(x:ι, y:ι): o) User specified sub works!{ Y -> (\lambda x ( - ((x + 1) \< x))) } Warning: Axiom not uniquely specified, possible candidates: IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o), IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o) Warning: Axiom not uniquely specified, possible candidates: A32 ∀x:ι.∀y:ι.(<(x:ι, y:ι): o⊃<(+(x:ι, 1:ι):ι, +(y:ι, 1:ι):ι): o), A36 ∀x:ι.∀z:ι.∀y:ι.(<(x:ι, y:ι): o⊃<(+(x:ι, z:ι):ι, +(y:ι, z:ι):ι): o) User specified sub works!{ Y -> (\lambda x (x \< (x + 1))) } Warning: Axiom not uniquely specified, possible candidates: IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o), IND ∀Y:(ι->ο).((Y(0:ι): o∧∀n:ι.(Y(n:ι): o⊃Y(+(n:ι, 1:ι):ι): o))⊃∀n:ι.Y(n:ι): o) wrong sub found! ((exists x_{16} (MON(h, 0) & NOCC(h, 0, \sigma))) & (all n ((exists x_{17} (MON(h, n) & NOCC(h, n, \sigma))) -> (exists x_{18} (MON(h, (n + 1)) & NOCC(h, (n + 1), \sigma)))))) -> (all n (exists x_{19} (MON(h, n) & NOCC(h, n, \sigma)))) for ((exists h (MON(h, 0) & NOCC(h, 0, \sigma))) & (all n ((exists h (MON(h, n) & NOCC(h, n, \sigma))) -> (exists h (MON(h, (n + 1)) & NOCC(h, (n + 1), \sigma)))))) -> (all n (exists h (MON(h, n) & NOCC(h, n, \sigma)))) sub={ h -> h } ax= ((exists h (MON(h, 0) & NOCC(h, 0, \sigma))) & (all n ((exists h (MON(h, n) & NOCC(h, n, \sigma))) -> (exists h (MON(h, (n + 1)) & NOCC(h, (n + 1), \sigma)))))) -> (all n (exists h (MON(h, n) & NOCC(h, n, \sigma)))) User specified sub works!{ Y -> (\lambda x (exists h (MON(h, x) & NOCC(h, x, \sigma)))) } Tests run: 12, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 8.468 sec \<<< FAILURE! - in at.logic.integration_tests.MiscTest The system should::Construct proof with expansion sequent extracted from proof 2/2(at.logic.integration_tests.MiscTest) Time elapsed: 0.11 sec \<<< ERROR! at.logic.calculi.lk.base.LKUnaryRuleCreationException: Could not create lk rule c:l from parent P(0), (P(0)⊃P(s(0))) :- P(s(0)) with auxiliary formulas P(0) at at.logic.calculi.lk.propositionalRules.ContractionLeftRule$.apply(propositionalRules.scala:266) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:49) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:53) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:53) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:234) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:53) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:114) at at.logic.algorithms.lk.CleanStructuralRules$.cleanStructuralRules(elimination.scala:43) at at.logic.algorithms.lk.CleanStructuralRules$.apply(elimination.scala:23) at at.logic.algorithms.lk.solve$.startProving(solve.scala:66) at at.logic.algorithms.lk.solve$.expansionProofToLKProof(solve.scala:55) at at.logic.integration_tests.MiscTest$$anonfun$1$$anonfun$apply$42.apply(MiscTest.scala:259) at at.logic.integration_tests.MiscTest$$anonfun$1$$anonfun$apply$42.apply(MiscTest.scala:255) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire... _Original issue: http://code.google.com/p/gapt/issues/detail?id=264_
priority
non deterministic bug apparently in cleanstructuralrules from on april i have been experiencing the following strange behaviour during mvn install a test failed as below i tried running this test again several times using mvn pl integration tests misc test install and sometimes this problem did occur and sometimes it did not i e all tests passed successfully in fact after around tries the test seems to fail with a probability of only this looks to me like a bug in cleanstructuralrules which is sometimes exposed and sometimes not depending on the proof produced by reading back an expansion tree the bug should be fixed even if it appears only sometimes independently of that it would be nice to understand the cause of this non deterministic behaviour bernhard any thoughts on that t e s t s running at logic integration tests misctest using onevariabledelta end sequent p ∀x p x ⊃p s x p s s s s size of term set initializing generalized delta table set based number of grammars smallest grammar size number of smallest grammars warn no appenders could be found for logger at logic algorithms cutintroduction cutintroduction warn please initialize the system properly warn see for more info grammar chosen list s α α o set list list s s minimized cut formula ∀x p s s x ∨¬p x warning axiom not uniquely specified possible candidates ∀x ι bm x ι ι ι x ι o ∀x ι bm ι x ι ι ι o warning axiom not uniquely specified possible candidates ∀x ι ∀y ι x ι ι ι y ι ι ι o⊃ x ι y ι o ∀x ι ∀z ι ∀y ι x ι z ι ι y ι z ι ι o⊃ x ι y ι o user specified sub works y lambda x x x warning axiom not uniquely specified possible candidates ind ∀y ι ο y ι o∧∀n ι y n ι o⊃y n ι ι ι o ⊃∀n ι y n ι o ind ∀y ι ο y ι o∧∀n ι y n ι o⊃y n ι ι ι o ⊃∀n ι y n ι o warning axiom not uniquely specified possible candidates ∀x ι ∀y ι x ι y ι o⊃ x ι ι ι y ι ι ι o ∀x ι ∀z ι ∀y ι x ι y ι o⊃ x ι z ι ι y ι z ι ι o user specified sub works y lambda x x x warning axiom not uniquely specified possible candidates ind ∀y ι ο y ι o∧∀n ι y n ι o⊃y n ι ι ι o ⊃∀n ι y n ι o ind ∀y ι ο y ι o∧∀n ι y n ι o⊃y n ι ι ι o ⊃∀n ι y n ι o wrong sub found exists x mon h nocc h sigma all n exists x mon h n nocc h n sigma exists x mon h n nocc h n sigma all n exists x mon h n nocc h n sigma for exists h mon h nocc h sigma all n exists h mon h n nocc h n sigma exists h mon h n nocc h n sigma all n exists h mon h n nocc h n sigma sub h h ax exists h mon h nocc h sigma all n exists h mon h n nocc h n sigma exists h mon h n nocc h n sigma all n exists h mon h n nocc h n sigma user specified sub works y lambda x exists h mon h x nocc h x sigma tests run failures errors skipped time elapsed sec failure in at logic integration tests misctest the system should construct proof with expansion sequent extracted from proof at logic integration tests misctest time elapsed sec error at logic calculi lk base lkunaryrulecreationexception could not create lk rule c l from parent p p ⊃p s p s with auxiliary formulas p at at logic calculi lk propositionalrules contractionleftrule apply propositionalrules scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules cleanstructuralrules elimination scala at at logic algorithms lk cleanstructuralrules apply elimination scala at at logic algorithms lk solve startproving solve scala at at logic algorithms lk solve expansionprooftolkproof solve scala at at logic integration tests misctest anonfun anonfun apply apply misctest scala at at logic integration tests misctest anonfun anonfun apply apply misctest scala at org apache maven surefire execute java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter invokeproviderinsameclassloader forkedbooter java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire original issue
1
402,406
11,809,292,368
IssuesEvent
2020-03-19 14:44:39
ayumi-cloud/oc-security-module
https://api.github.com/repos/ayumi-cloud/oc-security-module
opened
Non-updated firewall module results to use default options
Firewall Priority: High enhancement in-progress
### Enhancement idea - [ ] Non-updated firewall module results to use default options.
1.0
Non-updated firewall module results to use default options - ### Enhancement idea - [ ] Non-updated firewall module results to use default options.
priority
non updated firewall module results to use default options enhancement idea non updated firewall module results to use default options
1
90,262
3,813,488,956
IssuesEvent
2016-03-28 06:01:42
TehZarathustra/spectrum
https://api.github.com/repos/TehZarathustra/spectrum
closed
Rename address bar extentions
desktop High Priority Mobile tablets
![screen shot 2016-03-23 at 7 08 48 am](https://cloud.githubusercontent.com/assets/17010253/13987723/1fe6cdec-f0c6-11e5-97f8-79fe45c1113e.png) - About Us page: should be /about - Services Page(exterior) should be /services and all other according to their main titles
1.0
Rename address bar extentions - ![screen shot 2016-03-23 at 7 08 48 am](https://cloud.githubusercontent.com/assets/17010253/13987723/1fe6cdec-f0c6-11e5-97f8-79fe45c1113e.png) - About Us page: should be /about - Services Page(exterior) should be /services and all other according to their main titles
priority
rename address bar extentions about us page should be about services page exterior should be services and all other according to their main titles
1
545,917
15,977,533,962
IssuesEvent
2021-04-17 05:34:09
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
closed
Cannot update Devportal visibility and other required fields by Publisher User
API-M 4.0.0 Priority/High React-UI Severity/Critical Type/Bug
### Description: A publisher user cannot edit and save Devportal Visibility, GitHub url and Slack Url under Design configurations together with Subscription policies. The save button is disabled for the publisher user.
1.0
Cannot update Devportal visibility and other required fields by Publisher User - ### Description: A publisher user cannot edit and save Devportal Visibility, GitHub url and Slack Url under Design configurations together with Subscription policies. The save button is disabled for the publisher user.
priority
cannot update devportal visibility and other required fields by publisher user description a publisher user cannot edit and save devportal visibility github url and slack url under design configurations together with subscription policies the save button is disabled for the publisher user
1
714,555
24,566,267,822
IssuesEvent
2022-10-13 03:31:04
MLVETDevelopers/mlvet
https://api.github.com/repos/MLVETDevelopers/mlvet
closed
Out of sync video playback/scrubber bar/ transcription playback - spacebar triggered
bug high priority
When you click on the video and press spacebar this only causes the video to start playing and the scrubber bar and transcription playback to not do anything. When you press play on the UI then, all 3 are out of sync Uploading MLVET 2022-08-31 14-33-02.mp4…
1.0
Out of sync video playback/scrubber bar/ transcription playback - spacebar triggered - When you click on the video and press spacebar this only causes the video to start playing and the scrubber bar and transcription playback to not do anything. When you press play on the UI then, all 3 are out of sync Uploading MLVET 2022-08-31 14-33-02.mp4…
priority
out of sync video playback scrubber bar transcription playback spacebar triggered when you click on the video and press spacebar this only causes the video to start playing and the scrubber bar and transcription playback to not do anything when you press play on the ui then all are out of sync uploading mlvet …
1
350,646
10,499,850,502
IssuesEvent
2019-09-26 09:15:29
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
User roleList retrieved as empty String when there is a previous authentication session available.
Affected/5.7.0 Complexity/High Priority/High Type/Bug WUM
Steps to reproduce : - Enable the Analytics in wso2 IS [1] - Configure secondary user store (ABC.COM) - Configure tenant (abc.com) - Create a user in the secondary user store and in the tenant. - Configure the service provider and login to the application with username and password - copy the same URL in a different tab of the browser. This time roleList returned as an empty string as it tried to retrieve the role for the user ABC.COM/ABC.COM/alexaa@abc.com From the code base, when we tried to authenticate with the previous session it executes the logic [2]. Where we are able to see that, roleList returned with an empty string as authenticationData.getLocalUsername() return as follows: UserstoreDomain/username@tenantDomain [1]https://docs.wso2.com/display/IS570/Prerequisites+to+Publish+Statistics#PrerequisitestoPublishStatistics-Step02:EnableAnalyticsinWSO2IS [2] https://github.com/wso2-extensions/identity-data-publisher-authentication/blob/v5.1.8/components/org.wso2.carbon.identity.data.publisher.application.authentication/src/main/java/org/wso2/carbon/identity/data/publisher/application/authentication/impl/DASLoginDataPublisherImpl.java#L127
1.0
User roleList retrieved as empty String when there is a previous authentication session available. - Steps to reproduce : - Enable the Analytics in wso2 IS [1] - Configure secondary user store (ABC.COM) - Configure tenant (abc.com) - Create a user in the secondary user store and in the tenant. - Configure the service provider and login to the application with username and password - copy the same URL in a different tab of the browser. This time roleList returned as an empty string as it tried to retrieve the role for the user ABC.COM/ABC.COM/alexaa@abc.com From the code base, when we tried to authenticate with the previous session it executes the logic [2]. Where we are able to see that, roleList returned with an empty string as authenticationData.getLocalUsername() return as follows: UserstoreDomain/username@tenantDomain [1]https://docs.wso2.com/display/IS570/Prerequisites+to+Publish+Statistics#PrerequisitestoPublishStatistics-Step02:EnableAnalyticsinWSO2IS [2] https://github.com/wso2-extensions/identity-data-publisher-authentication/blob/v5.1.8/components/org.wso2.carbon.identity.data.publisher.application.authentication/src/main/java/org/wso2/carbon/identity/data/publisher/application/authentication/impl/DASLoginDataPublisherImpl.java#L127
priority
user rolelist retrieved as empty string when there is a previous authentication session available steps to reproduce enable the analytics in is configure secondary user store abc com configure tenant abc com create a user in the secondary user store and in the tenant configure the service provider and login to the application with username and password copy the same url in a different tab of the browser this time rolelist returned as an empty string as it tried to retrieve the role for the user abc com abc com alexaa abc com from the code base when we tried to authenticate with the previous session it executes the logic where we are able to see that rolelist returned with an empty string as authenticationdata getlocalusername return as follows userstoredomain username tenantdomain
1
353,764
10,558,368,241
IssuesEvent
2019-10-04 08:56:11
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
closed
Exception printed in logs when login to the store using a role with publisher role
3.0.0-alpha Priority/High Severity/Critical
Steps: 1. Create API and then publish it using a user with a publisher role. 2. go to the store and select Sign in button. Automatically sign in using the publisher user since he didn't log out from the publisher 3. When APIs are viewed following erros are shown in logs [2019-09-23 10:11:15,045] ERROR - ApiMgtDAO Could not load Subscriber records for: publisher [2019-09-23 10:11:15,047] ERROR - ApisApiServiceImpl Error while retrieving ratings for API 67bbbe4c-8b46-4782-9a48-7db290737be2 org.wso2.carbon.apimgt.api.APIManagementException: Could not load Subscriber records for: publisher at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating_aroundBody170(ApiMgtDAO.java:3590) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating(ApiMgtDAO.java:3577) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating_aroundBody168(ApiMgtDAO.java:3554) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating(ApiMgtDAO.java:3547) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.APIConsumerImpl.getUserRating_aroundBody48(APIConsumerImpl.java:1912) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.APIConsumerImpl.getUserRating(APIConsumerImpl.java:1911) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApisApiServiceImpl.apisApiIdRatingsGet(ApisApiServiceImpl.java:472) [classes/:?] at org.wso2.carbon.apimgt.rest.api.store.v1.ApisApi.apisApiIdRatingsGet(ApisApi.java:169) [classes/:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161] When API is selected, the user gets signed out and gets redirected to the public page. We need to handle different user role scenarios
1.0
Exception printed in logs when login to the store using a role with publisher role - Steps: 1. Create API and then publish it using a user with a publisher role. 2. go to the store and select Sign in button. Automatically sign in using the publisher user since he didn't log out from the publisher 3. When APIs are viewed following erros are shown in logs [2019-09-23 10:11:15,045] ERROR - ApiMgtDAO Could not load Subscriber records for: publisher [2019-09-23 10:11:15,047] ERROR - ApisApiServiceImpl Error while retrieving ratings for API 67bbbe4c-8b46-4782-9a48-7db290737be2 org.wso2.carbon.apimgt.api.APIManagementException: Could not load Subscriber records for: publisher at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating_aroundBody170(ApiMgtDAO.java:3590) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating(ApiMgtDAO.java:3577) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating_aroundBody168(ApiMgtDAO.java:3554) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getUserRating(ApiMgtDAO.java:3547) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.APIConsumerImpl.getUserRating_aroundBody48(APIConsumerImpl.java:1912) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.impl.APIConsumerImpl.getUserRating(APIConsumerImpl.java:1911) ~[org.wso2.carbon.apimgt.impl_6.5.121.jar:?] at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApisApiServiceImpl.apisApiIdRatingsGet(ApisApiServiceImpl.java:472) [classes/:?] at org.wso2.carbon.apimgt.rest.api.store.v1.ApisApi.apisApiIdRatingsGet(ApisApi.java:169) [classes/:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161] When API is selected, the user gets signed out and gets redirected to the public page. We need to handle different user role scenarios
priority
exception printed in logs when login to the store using a role with publisher role steps create api and then publish it using a user with a publisher role go to the store and select sign in button automatically sign in using the publisher user since he didn t log out from the publisher when apis are viewed following erros are shown in logs error apimgtdao could not load subscriber records for publisher error apisapiserviceimpl error while retrieving ratings for api org carbon apimgt api apimanagementexception could not load subscriber records for publisher at org carbon apimgt impl dao apimgtdao getuserrating apimgtdao java at org carbon apimgt impl dao apimgtdao getuserrating apimgtdao java at org carbon apimgt impl dao apimgtdao getuserrating apimgtdao java at org carbon apimgt impl dao apimgtdao getuserrating apimgtdao java at org carbon apimgt impl apiconsumerimpl getuserrating apiconsumerimpl java at org carbon apimgt impl apiconsumerimpl getuserrating apiconsumerimpl java at org carbon apimgt rest api store impl apisapiserviceimpl apisapiidratingsget apisapiserviceimpl java at org carbon apimgt rest api store apisapi apisapiidratingsget apisapi java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java when api is selected the user gets signed out and gets redirected to the public page we need to handle different user role scenarios
1
655,290
21,684,169,119
IssuesEvent
2022-05-09 09:37:13
oceanprotocol/docs
https://api.github.com/repos/oceanprotocol/docs
closed
Lots of 404
Type: Bug Priority: High
Based on Netlify stats, there were quite a lot of 404's within last 30 days alone: <img width="621" alt="Screen Shot 2022-04-13 at 11 58 17" src="https://user-images.githubusercontent.com/90316/163166130-38228c01-d69b-4781-b715-959fb55f3225.png"> Please check all of them, and as usual when changing URLs, make sure to add proper redirects, ideally 301 but user being redirected is most important. This has to be done ALWAYS when renaming actual URL paths which seem to have happened a lot in last months. This might help for getting redirects done within the normal authoring flow: https://github.com/kremalicious/gatsby-redirect-from You can of course ignore the php stuff (people trying to hack us assuming we use WordPress) and `.well-known/` (Let's Encrypt SSL certificates)
1.0
Lots of 404 - Based on Netlify stats, there were quite a lot of 404's within last 30 days alone: <img width="621" alt="Screen Shot 2022-04-13 at 11 58 17" src="https://user-images.githubusercontent.com/90316/163166130-38228c01-d69b-4781-b715-959fb55f3225.png"> Please check all of them, and as usual when changing URLs, make sure to add proper redirects, ideally 301 but user being redirected is most important. This has to be done ALWAYS when renaming actual URL paths which seem to have happened a lot in last months. This might help for getting redirects done within the normal authoring flow: https://github.com/kremalicious/gatsby-redirect-from You can of course ignore the php stuff (people trying to hack us assuming we use WordPress) and `.well-known/` (Let's Encrypt SSL certificates)
priority
lots of based on netlify stats there were quite a lot of s within last days alone img width alt screen shot at src please check all of them and as usual when changing urls make sure to add proper redirects ideally but user being redirected is most important this has to be done always when renaming actual url paths which seem to have happened a lot in last months this might help for getting redirects done within the normal authoring flow you can of course ignore the php stuff people trying to hack us assuming we use wordpress and well known let s encrypt ssl certificates
1
641,619
20,830,672,668
IssuesEvent
2022-03-19 11:34:57
AY2122S2-CS2113T-T09-2/tp
https://api.github.com/repos/AY2122S2-CS2113T-T09-2/tp
closed
Implement Unique Identifiers for Workouts and Plans
type.Task priority.High
Currently, workouts are referenced by their relative indexes as seen in `workout /list`. This can be potentially an issue as the plans and schedule features are implemented. **Example of Issue** Assume our current workout list is as follows: ``` 1. push up (10 reps) 2. lunge (20 reps) 3. russian twist (1000 reps) 4. squat (30 reps) ``` Now, suppose the user creates a plan that references indexes 1, 2, and 4 as seen from `workout /list`. Thus, the plan will look something like this: ``` 1. push up (10 reps) 2. lunge (20 reps) 4. squat (30 reps) ``` > Note: This does not reflect the final visualisation of the 'plans' feature. Next, suppose the user decides to delete Workout 3 (Russian twist with 1000 reps). So, the list will now look something like this: ``` 1. push up (10 reps) 2. lunge (20 reps) 3. squat (30 reps) ``` This becomes an issue because the plan that was created references indexes 1, 2, and 4, but index 4 no longer exists, resulting in inconsistencies. Thus, this is a motivation for workouts and plans to have unique identifiers so that plans can reference to unique identifiers of workouts. Likewise, a schedule can reference to unique identifiers of plans, instead of relative indexes that may change.
1.0
Implement Unique Identifiers for Workouts and Plans - Currently, workouts are referenced by their relative indexes as seen in `workout /list`. This can be potentially an issue as the plans and schedule features are implemented. **Example of Issue** Assume our current workout list is as follows: ``` 1. push up (10 reps) 2. lunge (20 reps) 3. russian twist (1000 reps) 4. squat (30 reps) ``` Now, suppose the user creates a plan that references indexes 1, 2, and 4 as seen from `workout /list`. Thus, the plan will look something like this: ``` 1. push up (10 reps) 2. lunge (20 reps) 4. squat (30 reps) ``` > Note: This does not reflect the final visualisation of the 'plans' feature. Next, suppose the user decides to delete Workout 3 (Russian twist with 1000 reps). So, the list will now look something like this: ``` 1. push up (10 reps) 2. lunge (20 reps) 3. squat (30 reps) ``` This becomes an issue because the plan that was created references indexes 1, 2, and 4, but index 4 no longer exists, resulting in inconsistencies. Thus, this is a motivation for workouts and plans to have unique identifiers so that plans can reference to unique identifiers of workouts. Likewise, a schedule can reference to unique identifiers of plans, instead of relative indexes that may change.
priority
implement unique identifiers for workouts and plans currently workouts are referenced by their relative indexes as seen in workout list this can be potentially an issue as the plans and schedule features are implemented example of issue assume our current workout list is as follows push up reps lunge reps russian twist reps squat reps now suppose the user creates a plan that references indexes and as seen from workout list thus the plan will look something like this push up reps lunge reps squat reps note this does not reflect the final visualisation of the plans feature next suppose the user decides to delete workout russian twist with reps so the list will now look something like this push up reps lunge reps squat reps this becomes an issue because the plan that was created references indexes and but index no longer exists resulting in inconsistencies thus this is a motivation for workouts and plans to have unique identifiers so that plans can reference to unique identifiers of workouts likewise a schedule can reference to unique identifiers of plans instead of relative indexes that may change
1
463,066
13,259,049,936
IssuesEvent
2020-08-20 16:10:56
rstudio/plumber
https://api.github.com/repos/rstudio/plumber
closed
`removeNAOrNulls` breaks on no swagger spec
difficulty: novice effort: low help wanted priority: high
From #417 ```r pr <- plumber$new() pr$handle("GET", "/:path/here", function(){}) pr$run( port = 1234, swagger = function(pr_, spec, ...) { spec$info$title <- Sys.time() spec } ) ``` Visiting http://127.0.0.1:1234/openapi.json causes an error ``` <simpleError in if (any(toRemove)) { x[toRemove] <- NULL}: missing value where TRUE/FALSE needed> ```
1.0
`removeNAOrNulls` breaks on no swagger spec - From #417 ```r pr <- plumber$new() pr$handle("GET", "/:path/here", function(){}) pr$run( port = 1234, swagger = function(pr_, spec, ...) { spec$info$title <- Sys.time() spec } ) ``` Visiting http://127.0.0.1:1234/openapi.json causes an error ``` <simpleError in if (any(toRemove)) { x[toRemove] <- NULL}: missing value where TRUE/FALSE needed> ```
priority
removenaornulls breaks on no swagger spec from r pr plumber new pr handle get path here function pr run port swagger function pr spec spec info title sys time spec visiting causes an error
1
501,103
14,521,200,800
IssuesEvent
2020-12-14 06:58:02
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
lms.autozone.com - video or audio doesn't play
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal
<!-- @browser: Firefox 68.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/63587 --> **URL**: https://lms.autozone.com/Kview/CustomCodeBehind/base/courseware/scorm/scorm12courseframe.aspx **Browser / Version**: Firefox 68.0 **Operating System**: Linux **Tested Another Browser**: Yes Internet Explorer **Problem type**: Video or audio doesn't play **Description**: The video or audio does not play **Steps to Reproduce**: <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/c4e57145-1cf0-4ce2-b1bd-61c061a6759f.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200409090751</li><li>channel: default</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/12/a8764244-afe9-4e7b-a2cd-e1aa14f6ad1c) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
lms.autozone.com - video or audio doesn't play - <!-- @browser: Firefox 68.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/63587 --> **URL**: https://lms.autozone.com/Kview/CustomCodeBehind/base/courseware/scorm/scorm12courseframe.aspx **Browser / Version**: Firefox 68.0 **Operating System**: Linux **Tested Another Browser**: Yes Internet Explorer **Problem type**: Video or audio doesn't play **Description**: The video or audio does not play **Steps to Reproduce**: <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/c4e57145-1cf0-4ce2-b1bd-61c061a6759f.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200409090751</li><li>channel: default</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/12/a8764244-afe9-4e7b-a2cd-e1aa14f6ad1c) _From [webcompat.com](https://webcompat.com/) with ❤️_
priority
lms autozone com video or audio doesn t play url browser version firefox operating system linux tested another browser yes internet explorer problem type video or audio doesn t play description the video or audio does not play steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel default hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
1
390,983
11,566,779,332
IssuesEvent
2020-02-20 13:10:09
LiskHQ/lisk-desktop
https://api.github.com/repos/LiskHQ/lisk-desktop
closed
Delegates Vote weight are incorrect in the delegates tab (beta version)
priority: high type: bug type: unplanned
Delegates Vote weights are incorrect in the delegates tab (beta version) I unvoted someone from pool(101) and voted myself in.. the desktop shows voteweight 794 which is incorrect.. the core API shows { "meta": { "offset": 0, "limit": 10 }, "data": [ { "username": "subzero", "voteWeight": "9900079293400000", "rewards": "0", "producedBlocks": 0, "missedBlocks": 0, "productivity": 0, "account": { "address": "5553317242494141914L", "publicKey": "eb06e0a8cbb848f81f126b538794eb122ae8035917ded1da3e5c85618602f3ba", "secondPublicKey": "" }, "approval": 98.8 } ], "links": {} }
1.0
Delegates Vote weight are incorrect in the delegates tab (beta version) - Delegates Vote weights are incorrect in the delegates tab (beta version) I unvoted someone from pool(101) and voted myself in.. the desktop shows voteweight 794 which is incorrect.. the core API shows { "meta": { "offset": 0, "limit": 10 }, "data": [ { "username": "subzero", "voteWeight": "9900079293400000", "rewards": "0", "producedBlocks": 0, "missedBlocks": 0, "productivity": 0, "account": { "address": "5553317242494141914L", "publicKey": "eb06e0a8cbb848f81f126b538794eb122ae8035917ded1da3e5c85618602f3ba", "secondPublicKey": "" }, "approval": 98.8 } ], "links": {} }
priority
delegates vote weight are incorrect in the delegates tab beta version delegates vote weights are incorrect in the delegates tab beta version i unvoted someone from pool and voted myself in the desktop shows voteweight which is incorrect the core api shows meta offset limit data username subzero voteweight rewards producedblocks missedblocks productivity account address publickey secondpublickey approval links
1
550,236
16,107,752,931
IssuesEvent
2021-04-27 16:54:32
refgenie/refgenconf
https://api.github.com/repos/refgenie/refgenconf
closed
tags in `RefGenConf.populate`?
likely-solved priority-high question
It appears that `RefGenConf.populate` does not respect tags. Note the `:test` string appended to the output path below. ```python In [1]: from refgenconf import RefGenConf In [2]: rgc = RefGenConf("g.yml") In [3]: rgc.populate(glob="refgenie://rCRSd/fasta:test") Out[3]: '/Users/mstolarczyk/Desktop/testing/refgenie/alias/rCRSd/fasta/default/rCRSd.fa:test' ``` also: https://github.com/refgenie/refgenconf/runs/2270085297#step:8:59 Is that intentional? Changing the regex from `"refgenie://([A-Za-z0-9_/\.]+)?"` to `"refgenie://([A-Za-z0-9_/\.\:]+)?"` solves the issue.
1.0
tags in `RefGenConf.populate`? - It appears that `RefGenConf.populate` does not respect tags. Note the `:test` string appended to the output path below. ```python In [1]: from refgenconf import RefGenConf In [2]: rgc = RefGenConf("g.yml") In [3]: rgc.populate(glob="refgenie://rCRSd/fasta:test") Out[3]: '/Users/mstolarczyk/Desktop/testing/refgenie/alias/rCRSd/fasta/default/rCRSd.fa:test' ``` also: https://github.com/refgenie/refgenconf/runs/2270085297#step:8:59 Is that intentional? Changing the regex from `"refgenie://([A-Za-z0-9_/\.]+)?"` to `"refgenie://([A-Za-z0-9_/\.\:]+)?"` solves the issue.
priority
tags in refgenconf populate it appears that refgenconf populate does not respect tags note the test string appended to the output path below python in from refgenconf import refgenconf in rgc refgenconf g yml in rgc populate glob refgenie rcrsd fasta test out users mstolarczyk desktop testing refgenie alias rcrsd fasta default rcrsd fa test also is that intentional changing the regex from refgenie to refgenie solves the issue
1
789,773
27,805,216,945
IssuesEvent
2023-03-17 19:12:31
PrefectHQ/prefect-ui-library
https://api.github.com/repos/PrefectHQ/prefect-ui-library
closed
Block create/edit forms don't allow for the selection of existing blocks
bug priority:high
When creating or editing blocks, the fields for the block are shown, but the ability to choose an existing one is not shown. This is problematic for block types with optional attributes that are block types, but the referenced block types have required fields — preventing the form's submission. An example is the `KubernetesJob`/`KubernetesClusterConfig` relationship (screenshot below) since a user doesn't need to provide a cluster config if they run their agent within a cluster. ![Screenshot 2023-03-17 at 8 11 42 AM](https://user-images.githubusercontent.com/12350579/225914358-e671179e-6984-41c3-9cd4-626b157a38ad.png) I think this behavior was introduced in https://github.com/PrefectHQ/prefect-ui-library/pull/1222.
1.0
Block create/edit forms don't allow for the selection of existing blocks - When creating or editing blocks, the fields for the block are shown, but the ability to choose an existing one is not shown. This is problematic for block types with optional attributes that are block types, but the referenced block types have required fields — preventing the form's submission. An example is the `KubernetesJob`/`KubernetesClusterConfig` relationship (screenshot below) since a user doesn't need to provide a cluster config if they run their agent within a cluster. ![Screenshot 2023-03-17 at 8 11 42 AM](https://user-images.githubusercontent.com/12350579/225914358-e671179e-6984-41c3-9cd4-626b157a38ad.png) I think this behavior was introduced in https://github.com/PrefectHQ/prefect-ui-library/pull/1222.
priority
block create edit forms don t allow for the selection of existing blocks when creating or editing blocks the fields for the block are shown but the ability to choose an existing one is not shown this is problematic for block types with optional attributes that are block types but the referenced block types have required fields — preventing the form s submission an example is the kubernetesjob kubernetesclusterconfig relationship screenshot below since a user doesn t need to provide a cluster config if they run their agent within a cluster i think this behavior was introduced in
1
189,829
6,802,145,932
IssuesEvent
2017-11-02 19:08:12
dmwm/WMCore
https://api.github.com/repos/dmwm/WMCore
closed
Update MaxRSS when Memory changes in ACDC
High Priority
Regarding this patch: https://github.com/dmwm/WMCore/pull/8204 I missed one detail, when someone creates and ACDC with a different Memory value (compared to the original wf), we need to trigger the MaxRSS update as well. I'll try to get it fixed for tomorrow.
1.0
Update MaxRSS when Memory changes in ACDC - Regarding this patch: https://github.com/dmwm/WMCore/pull/8204 I missed one detail, when someone creates and ACDC with a different Memory value (compared to the original wf), we need to trigger the MaxRSS update as well. I'll try to get it fixed for tomorrow.
priority
update maxrss when memory changes in acdc regarding this patch i missed one detail when someone creates and acdc with a different memory value compared to the original wf we need to trigger the maxrss update as well i ll try to get it fixed for tomorrow
1
52,939
3,031,595,810
IssuesEvent
2015-08-05 00:14:20
washingtontrails/vms
https://api.github.com/repos/washingtontrails/vms
opened
VMS WP Reminder email template includes broken link to Docusign
Bug High Priority Salesforce
When testing the SF Email Reminder Template: https://cs9.salesforce.com/00XK0000000Mg7E the email contains a broken hyperlink to the Docusign Document. It looks like this in my test email: Required Documents Please complete the Parental Release: https://demo.docusign.net/Member/PowerFormSigning.aspx?PowerFormId=7cdfa327-2f53-4d51-90c9-c7b32622dfb5&EnvelopeField_Youth Id=003K0000010IT9EIAW&Participant Name=Charlie Kahle&WP Name=Beacon Rock Trail Note the space between "Envelope Field_Youth" and "Id=003K"... That is where the hyperlink stops.
1.0
VMS WP Reminder email template includes broken link to Docusign - When testing the SF Email Reminder Template: https://cs9.salesforce.com/00XK0000000Mg7E the email contains a broken hyperlink to the Docusign Document. It looks like this in my test email: Required Documents Please complete the Parental Release: https://demo.docusign.net/Member/PowerFormSigning.aspx?PowerFormId=7cdfa327-2f53-4d51-90c9-c7b32622dfb5&EnvelopeField_Youth Id=003K0000010IT9EIAW&Participant Name=Charlie Kahle&WP Name=Beacon Rock Trail Note the space between "Envelope Field_Youth" and "Id=003K"... That is where the hyperlink stops.
priority
vms wp reminder email template includes broken link to docusign when testing the sf email reminder template the email contains a broken hyperlink to the docusign document it looks like this in my test email required documents please complete the parental release id participant name charlie kahle wp name beacon rock trail note the space between envelope field youth and id that is where the hyperlink stops
1
617,089
19,342,416,169
IssuesEvent
2021-12-15 06:59:23
matrixorigin/matrixone
https://api.github.com/repos/matrixorigin/matrixone
closed
Behavior of Float type precision differs from MySQL
kind/bug priority/high needs-triage severity/moderate
<!-- Please describe your issue in English. --> #### Can be reproduced ? Yes. #### Steps: create table t_float(a float, b float(54)); insert into t_float values (123456789, 12345678.123456789); insert into t_float values (1.23456789, 1.23456789); select * from t_float; #### Expected behavior: There are two different behaviors when comparing against MySQL: 1. According to MySQL doc, for FLOAT(p), the precision value 'p' is used only to determine storage size. A precision from 0 to 23 results in a 4-byte single-precision FLOAT column. A precision from 24 to 53 results in an 8-byte double-precision DOUBLE column. Due to the above rule, if try to define a FLOAT type column with a precision value is 54, then MySQL will issue an error. Please see below: mysql> create table t_float(a float, b float(54)); ERROR 1063 (42000): Incorrect column specifier for column 'b' In contrast to that, Matrixone lets the same SQL run successfully. We need to decide if apply the same rule to the precision value as MySQL does. 2. The retrieved value is different. As in MySQL, the default precision is 6 bits. For example, assuming that 123456789 was inserted into MySQL, it returns 123457000 when query, whereas Matrixone returns 123456792.0000 instead. #### Actual behavior: mysql> create table t_float(a float, b float(54)); Query OK, 0 rows affected (0.28 sec) mysql> insert into t_float values (123456789, 12345678.123456789); Query OK, 0 rows affected (0.08 sec) mysql> insert into t_float values (1.23456789, 1.23456789); Query OK, 0 rows affected (0.04 sec) mysql> select * from t_float; +----------------+---------------+ | a | b | +----------------+---------------+ | 123456792.0000 | 12345678.0000 | | 1.2346 | 1.2346 | +----------------+---------------+ 2 rows in set (0.01 sec) #### Environment: - Version or commit-id (e.g. v0.1.0 or 8b23a93):commit 06261a26e804acad91414ef9669aea9378bc7996 - Hardware parameters: - OS type: - Others: #### Configuration file: #### Additional context: - Error message from client: - Server log: - Other information:
1.0
Behavior of Float type precision differs from MySQL - <!-- Please describe your issue in English. --> #### Can be reproduced ? Yes. #### Steps: create table t_float(a float, b float(54)); insert into t_float values (123456789, 12345678.123456789); insert into t_float values (1.23456789, 1.23456789); select * from t_float; #### Expected behavior: There are two different behaviors when comparing against MySQL: 1. According to MySQL doc, for FLOAT(p), the precision value 'p' is used only to determine storage size. A precision from 0 to 23 results in a 4-byte single-precision FLOAT column. A precision from 24 to 53 results in an 8-byte double-precision DOUBLE column. Due to the above rule, if try to define a FLOAT type column with a precision value is 54, then MySQL will issue an error. Please see below: mysql> create table t_float(a float, b float(54)); ERROR 1063 (42000): Incorrect column specifier for column 'b' In contrast to that, Matrixone lets the same SQL run successfully. We need to decide if apply the same rule to the precision value as MySQL does. 2. The retrieved value is different. As in MySQL, the default precision is 6 bits. For example, assuming that 123456789 was inserted into MySQL, it returns 123457000 when query, whereas Matrixone returns 123456792.0000 instead. #### Actual behavior: mysql> create table t_float(a float, b float(54)); Query OK, 0 rows affected (0.28 sec) mysql> insert into t_float values (123456789, 12345678.123456789); Query OK, 0 rows affected (0.08 sec) mysql> insert into t_float values (1.23456789, 1.23456789); Query OK, 0 rows affected (0.04 sec) mysql> select * from t_float; +----------------+---------------+ | a | b | +----------------+---------------+ | 123456792.0000 | 12345678.0000 | | 1.2346 | 1.2346 | +----------------+---------------+ 2 rows in set (0.01 sec) #### Environment: - Version or commit-id (e.g. v0.1.0 or 8b23a93):commit 06261a26e804acad91414ef9669aea9378bc7996 - Hardware parameters: - OS type: - Others: #### Configuration file: #### Additional context: - Error message from client: - Server log: - Other information:
priority
behavior of float type precision differs from mysql can be reproduced yes steps create table t float a float b float insert into t float values insert into t float values select from t float expected behavior there are two different behaviors when comparing against mysql according to mysql doc for float p the precision value p is used only to determine storage size a precision from to results in a byte single precision float column a precision from to results in an byte double precision double column due to the above rule if try to define a float type column with a precision value is then mysql will issue an error please see below mysql create table t float a float b float error incorrect column specifier for column b in contrast to that matrixone lets the same sql run successfully we need to decide if apply the same rule to the precision value as mysql does the retrieved value is different as in mysql the default precision is bits for example assuming that was inserted into mysql it returns when query whereas matrixone returns instead actual behavior mysql create table t float a float b float query ok rows affected sec mysql insert into t float values query ok rows affected sec mysql insert into t float values query ok rows affected sec mysql select from t float a b rows in set sec environment version or commit id e g or commit hardware parameters os type others configuration file additional context error message from client server log other information
1
797,448
28,146,468,406
IssuesEvent
2023-04-02 14:42:37
davidfstr/Crystal-Web-Archiver
https://api.github.com/repos/davidfstr/Crystal-Web-Archiver
opened
README: Improve appearance on PyPI
priority-high type-bug topic-firstrun
Problems: * [ ] Logo is broken image * [ ] Headings say `<small>` literally --- <img width="1209" alt="Screen Shot 2023-04-02 at 10 05 22 AM" src="https://user-images.githubusercontent.com/764688/229359841-96566cef-9d90-4606-b8c4-c62bb58771a6.png"> --- <img width="969" alt="Screen Shot 2023-04-02 at 10 05 35 AM" src="https://user-images.githubusercontent.com/764688/229359853-07a68084-87f4-4624-aa11-a24415b60ff7.png">
1.0
README: Improve appearance on PyPI - Problems: * [ ] Logo is broken image * [ ] Headings say `<small>` literally --- <img width="1209" alt="Screen Shot 2023-04-02 at 10 05 22 AM" src="https://user-images.githubusercontent.com/764688/229359841-96566cef-9d90-4606-b8c4-c62bb58771a6.png"> --- <img width="969" alt="Screen Shot 2023-04-02 at 10 05 35 AM" src="https://user-images.githubusercontent.com/764688/229359853-07a68084-87f4-4624-aa11-a24415b60ff7.png">
priority
readme improve appearance on pypi problems logo is broken image headings say literally img width alt screen shot at am src img width alt screen shot at am src
1
165,942
6,288,543,901
IssuesEvent
2017-07-19 17:12:48
datproject/dat-node
https://api.github.com/repos/datproject/dat-node
opened
dat.archive.writeFile results in empty file
Priority: High Type: Bug
Writing a file from archive in `dat-node` results in empty file. ```js var filePath = 'hello.txt' var fileContent = 'echo 123' Dat('./test', function (err, dat) { if (err) throw err dat.archive.writeFile(filePath, fileContent, function (err) { if (err) throw err console.log('done') }) }) ``` However, this works with plain hyperdrive and dat-storage: ```js var hyperdrive = require('hyperdrive') var storage = require('dat-storage') var filePath = 'hello.txt' var fileContent = 'echo 123' var archive = hyperdrive(storage('./test'), {latest: true}) archive.writeFile(filePath, fileContent, function (err) { if (err) throw err console.log('done') }) ```
1.0
dat.archive.writeFile results in empty file - Writing a file from archive in `dat-node` results in empty file. ```js var filePath = 'hello.txt' var fileContent = 'echo 123' Dat('./test', function (err, dat) { if (err) throw err dat.archive.writeFile(filePath, fileContent, function (err) { if (err) throw err console.log('done') }) }) ``` However, this works with plain hyperdrive and dat-storage: ```js var hyperdrive = require('hyperdrive') var storage = require('dat-storage') var filePath = 'hello.txt' var fileContent = 'echo 123' var archive = hyperdrive(storage('./test'), {latest: true}) archive.writeFile(filePath, fileContent, function (err) { if (err) throw err console.log('done') }) ```
priority
dat archive writefile results in empty file writing a file from archive in dat node results in empty file js var filepath hello txt var filecontent echo dat test function err dat if err throw err dat archive writefile filepath filecontent function err if err throw err console log done however this works with plain hyperdrive and dat storage js var hyperdrive require hyperdrive var storage require dat storage var filepath hello txt var filecontent echo var archive hyperdrive storage test latest true archive writefile filepath filecontent function err if err throw err console log done
1
219,516
7,343,203,617
IssuesEvent
2018-03-07 10:33:21
CS2103JAN2018-W13-B1/main
https://api.github.com/repos/CS2103JAN2018-W13-B1/main
opened
Week 8 Enhancements
priority.high type.task
Let's decide what we're going to be working on for v1.1. Comment your preferences below, so we can decide what each of us will be doing. Thanks!
1.0
Week 8 Enhancements - Let's decide what we're going to be working on for v1.1. Comment your preferences below, so we can decide what each of us will be doing. Thanks!
priority
week enhancements let s decide what we re going to be working on for comment your preferences below so we can decide what each of us will be doing thanks
1
521,944
15,145,805,276
IssuesEvent
2021-02-11 05:37:24
AstroHuntsman/huntsman-pocs
https://api.github.com/repos/AstroHuntsman/huntsman-pocs
closed
Dome tracking issues
bug dome high priority
- Stop tracking before issuing slew command so dome is definitely not moving - Make sure dome is not moving before issuing slew command
1.0
Dome tracking issues - - Stop tracking before issuing slew command so dome is definitely not moving - Make sure dome is not moving before issuing slew command
priority
dome tracking issues stop tracking before issuing slew command so dome is definitely not moving make sure dome is not moving before issuing slew command
1
670,945
22,711,394,413
IssuesEvent
2022-07-05 19:47:19
canonical/grafana-k8s-operator
https://api.github.com/repos/canonical/grafana-k8s-operator
closed
Upgrade grafana image to the last stable version of 8.5.x
Type: Enhancement Priority: High Status: Triage
### Enhancement Proposal Upgrade to 8.5 so we can add Prometheus Alertmanager datasource
1.0
Upgrade grafana image to the last stable version of 8.5.x - ### Enhancement Proposal Upgrade to 8.5 so we can add Prometheus Alertmanager datasource
priority
upgrade grafana image to the last stable version of x enhancement proposal upgrade to so we can add prometheus alertmanager datasource
1
38,564
2,848,898,778
IssuesEvent
2015-05-30 07:07:28
DashboardHub/Blog
https://api.github.com/repos/DashboardHub/Blog
opened
Add author name to blog posts
enhancement priority: high
* [ ] list page * [ ] blog page * [ ] bottom of blog page when listing other blog posts
1.0
Add author name to blog posts - * [ ] list page * [ ] blog page * [ ] bottom of blog page when listing other blog posts
priority
add author name to blog posts list page blog page bottom of blog page when listing other blog posts
1
68,914
3,293,766,153
IssuesEvent
2015-10-30 20:36:15
GluuFederation/oxTrust
https://api.github.com/repos/GluuFederation/oxTrust
closed
Render ldapURL value in attribute-resolver.xml and login.config
High Priority
In cluster edition, we have a usecase where we use 2 LDAP servers: 1. ldap-1: host `55422c825258` and port `1636` 2. ldap-2: host `cf96202f1860` and port `1636` Given these 2 LDAP servers, cluster renders the following value in `oxTrust.properties`: ``` idp.ldap.server=55422c825258:1636,cf96202f1860:1636 ``` which will generate `/opt/idp/conf/attribute-resolver.xml` as seen below: ```xml <!-- LDAP Connector --> <resolver:DataConnector id="siteLDAP" xsi:type="dc:LDAPDirectory" ldapURL="ldaps://55422c825258:1636,cf96202f1860:1636" </resolver:DataConnector> ``` Certainly, this is an invalid connection string since there's a comma character there. I've tried to change `oxTrust.properties` to replace the comma with whitespace: ``` idp.ldap.server=55422c825258:1636 cf96202f1860:1636 ``` This will generate `attribute-resolver.xml` like this: ```xml <!-- LDAP Connector --> <resolver:DataConnector id="siteLDAP" xsi:type="dc:LDAPDirectory" ldapURL="ldaps://55422c825258:1636 cf96202f1860:1636" </resolver:DataConnector> ``` From `/opt/idp/idp-process.log`, I can see that connection to LDAP `55422c825258:1636` (ldap-1) is worked. However, when I shutdown ldap-1, Shib will try to connect to `cf96202f1860:1636` (ldap-2), and the connection is failed due to incorrect connection string. After reading docs at https://wiki.shibboleth.net/confluence/display/SHIB2/ResolverLDAPDataConnector, I figured out when I change the connection string to: ``` ldapURL="ldaps://55422c825258:1636 ldaps://cf96202f1860:1636" ``` Shib able to connect to available ldap-2 even when ldap-1 is unavailable. I may be wrong, but I guess we need to change the way oxTrust reads `idp.ldap.server` value in `oxTrust.properties`. For example, when `idp.ldap.server` uses comma: ``` idp.ldap.server = 55422c825258:1636,cf96202f1860:1636 ``` or whitespace: ``` idp.ldap.server = 55422c825258:1636 cf96202f1860:1636 ``` it should be recognized as 2 different LDAP servers, which should render: ``` ldapURL="ldaps://55422c825258:1636 ldaps://cf96202f1860:1636" ``` in `attribute-resolver.xml`. Also I've found the similar issue in `/opt/idp/conf/login.config`.
1.0
Render ldapURL value in attribute-resolver.xml and login.config - In cluster edition, we have a usecase where we use 2 LDAP servers: 1. ldap-1: host `55422c825258` and port `1636` 2. ldap-2: host `cf96202f1860` and port `1636` Given these 2 LDAP servers, cluster renders the following value in `oxTrust.properties`: ``` idp.ldap.server=55422c825258:1636,cf96202f1860:1636 ``` which will generate `/opt/idp/conf/attribute-resolver.xml` as seen below: ```xml <!-- LDAP Connector --> <resolver:DataConnector id="siteLDAP" xsi:type="dc:LDAPDirectory" ldapURL="ldaps://55422c825258:1636,cf96202f1860:1636" </resolver:DataConnector> ``` Certainly, this is an invalid connection string since there's a comma character there. I've tried to change `oxTrust.properties` to replace the comma with whitespace: ``` idp.ldap.server=55422c825258:1636 cf96202f1860:1636 ``` This will generate `attribute-resolver.xml` like this: ```xml <!-- LDAP Connector --> <resolver:DataConnector id="siteLDAP" xsi:type="dc:LDAPDirectory" ldapURL="ldaps://55422c825258:1636 cf96202f1860:1636" </resolver:DataConnector> ``` From `/opt/idp/idp-process.log`, I can see that connection to LDAP `55422c825258:1636` (ldap-1) is worked. However, when I shutdown ldap-1, Shib will try to connect to `cf96202f1860:1636` (ldap-2), and the connection is failed due to incorrect connection string. After reading docs at https://wiki.shibboleth.net/confluence/display/SHIB2/ResolverLDAPDataConnector, I figured out when I change the connection string to: ``` ldapURL="ldaps://55422c825258:1636 ldaps://cf96202f1860:1636" ``` Shib able to connect to available ldap-2 even when ldap-1 is unavailable. I may be wrong, but I guess we need to change the way oxTrust reads `idp.ldap.server` value in `oxTrust.properties`. For example, when `idp.ldap.server` uses comma: ``` idp.ldap.server = 55422c825258:1636,cf96202f1860:1636 ``` or whitespace: ``` idp.ldap.server = 55422c825258:1636 cf96202f1860:1636 ``` it should be recognized as 2 different LDAP servers, which should render: ``` ldapURL="ldaps://55422c825258:1636 ldaps://cf96202f1860:1636" ``` in `attribute-resolver.xml`. Also I've found the similar issue in `/opt/idp/conf/login.config`.
priority
render ldapurl value in attribute resolver xml and login config in cluster edition we have a usecase where we use ldap servers ldap host and port ldap host and port given these ldap servers cluster renders the following value in oxtrust properties idp ldap server which will generate opt idp conf attribute resolver xml as seen below xml resolver dataconnector id siteldap xsi type dc ldapdirectory ldapurl ldaps certainly this is an invalid connection string since there s a comma character there i ve tried to change oxtrust properties to replace the comma with whitespace idp ldap server this will generate attribute resolver xml like this xml resolver dataconnector id siteldap xsi type dc ldapdirectory ldapurl ldaps from opt idp idp process log i can see that connection to ldap ldap is worked however when i shutdown ldap shib will try to connect to ldap and the connection is failed due to incorrect connection string after reading docs at i figured out when i change the connection string to ldapurl ldaps ldaps shib able to connect to available ldap even when ldap is unavailable i may be wrong but i guess we need to change the way oxtrust reads idp ldap server value in oxtrust properties for example when idp ldap server uses comma idp ldap server or whitespace idp ldap server it should be recognized as different ldap servers which should render ldapurl ldaps ldaps in attribute resolver xml also i ve found the similar issue in opt idp conf login config
1
76,228
3,485,111,423
IssuesEvent
2015-12-31 01:28:23
nus-cs2103/website
https://api.github.com/repos/nus-cs2103/website
closed
Remove headings.js and its reference
difficulty.easy priority.high type.bug
The file headings.js is not inside the contents folder, but is located in the scripts folder (which is not inside the contents folder). <img width="1734" alt="screen shot 2015-12-28 at 04 56 25" src="https://cloud.githubusercontent.com/assets/8586215/12012266/8aea110c-ad1f-11e5-87cc-da4b069664cb.png"> the file headings.html is inside the contents folder, hence the current way of calling headings.js will always result in a file not found error.
1.0
Remove headings.js and its reference - The file headings.js is not inside the contents folder, but is located in the scripts folder (which is not inside the contents folder). <img width="1734" alt="screen shot 2015-12-28 at 04 56 25" src="https://cloud.githubusercontent.com/assets/8586215/12012266/8aea110c-ad1f-11e5-87cc-da4b069664cb.png"> the file headings.html is inside the contents folder, hence the current way of calling headings.js will always result in a file not found error.
priority
remove headings js and its reference the file headings js is not inside the contents folder but is located in the scripts folder which is not inside the contents folder img width alt screen shot at src the file headings html is inside the contents folder hence the current way of calling headings js will always result in a file not found error
1
147,117
5,634,144,932
IssuesEvent
2017-04-05 20:37:20
creativedisturbance/podcasts
https://api.github.com/repos/creativedisturbance/podcasts
closed
Unable to Upload Image in Wordpress - Creative Disturbance!
Glitch High Priority Website
I am unable to upload an image when adding details for a new person in creative disturbance - word press. I am getting a blank screen like this when trying to upload an image. I tried it in multiple browsers and also in Incognito mode just to see if there is any issue with Cache , but the issue still exists. <img width="1278" alt="screen shot 2017-04-05 at 3 01 41 pm" src="https://cloud.githubusercontent.com/assets/25909611/24724650/8d63ac12-1a11-11e7-8f80-68c2fa209596.png">
1.0
Unable to Upload Image in Wordpress - Creative Disturbance! - I am unable to upload an image when adding details for a new person in creative disturbance - word press. I am getting a blank screen like this when trying to upload an image. I tried it in multiple browsers and also in Incognito mode just to see if there is any issue with Cache , but the issue still exists. <img width="1278" alt="screen shot 2017-04-05 at 3 01 41 pm" src="https://cloud.githubusercontent.com/assets/25909611/24724650/8d63ac12-1a11-11e7-8f80-68c2fa209596.png">
priority
unable to upload image in wordpress creative disturbance i am unable to upload an image when adding details for a new person in creative disturbance word press i am getting a blank screen like this when trying to upload an image i tried it in multiple browsers and also in incognito mode just to see if there is any issue with cache but the issue still exists img width alt screen shot at pm src
1
309,638
9,477,735,197
IssuesEvent
2019-04-19 19:45:06
AKmeier/Sieve-Mobile-Application
https://api.github.com/repos/AKmeier/Sieve-Mobile-Application
closed
Items are not automatically deleted
bug high priority
In the develop branch, tasks are not deleted immediately upon clicking "delete", but they are removed from the list after navigating off of home page and back.
1.0
Items are not automatically deleted - In the develop branch, tasks are not deleted immediately upon clicking "delete", but they are removed from the list after navigating off of home page and back.
priority
items are not automatically deleted in the develop branch tasks are not deleted immediately upon clicking delete but they are removed from the list after navigating off of home page and back
1
509,531
14,739,099,712
IssuesEvent
2021-01-07 06:29:04
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
accounts.google.com - see bug description
browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
<!-- @browser: Firefox Mobile 85.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/65088 --> <!-- @extra_labels: browser-fenix --> **URL**: https://accounts.google.com/signin/v2/challenge/iae/verify?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin&cid=27&navigationDirection=forward&TL=AM3QAYYngwoHEdsMMuQEfOvOKkUmteskl1cTxTXwYzoV0p0sN8iKFmQmoa6dCP40 **Browser / Version**: Firefox Mobile 85.0 **Operating System**: Android **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: doesnt load **Steps to Reproduce**: <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/7b32bf39-6ef6-4c46-a444-7bea40e1e237.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201223151005</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2021/1/0312716d-b106-4ca9-9fe4-a583c6896d3f) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
accounts.google.com - see bug description - <!-- @browser: Firefox Mobile 85.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/65088 --> <!-- @extra_labels: browser-fenix --> **URL**: https://accounts.google.com/signin/v2/challenge/iae/verify?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin&cid=27&navigationDirection=forward&TL=AM3QAYYngwoHEdsMMuQEfOvOKkUmteskl1cTxTXwYzoV0p0sN8iKFmQmoa6dCP40 **Browser / Version**: Firefox Mobile 85.0 **Operating System**: Android **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: doesnt load **Steps to Reproduce**: <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/7b32bf39-6ef6-4c46-a444-7bea40e1e237.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201223151005</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2021/1/0312716d-b106-4ca9-9fe4-a583c6896d3f) _From [webcompat.com](https://webcompat.com/) with ❤️_
priority
accounts google com see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description doesnt load steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
1
528,189
15,361,590,231
IssuesEvent
2021-03-01 18:20:41
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
torch.nn.parallel.DistributedDataParallel is slow on backpropagation
high priority module: data parallel module: performance oncall: distributed triaged
## 🐛 Bug <!-- A clear and concise description of what the bug is. --> We are training with Bert model from [this repository](https://github.com/microsoft/AzureML-BERT/tree/master). We found out that when using `torch.nn.parallel.DistributedDataParallel`, we incur a ~3x performance slowdown per iteration compared to the default Apex DistributedDataParallel. We broke down the training into steps and found out the major slowdown happened at the backpropagation step. `loss.backward()` is ~10x slower for models wrapped with `torch.nn.parallel.DistributedDataParallel` compared to `distributed_apex.DistributedDataParallel`. The time is measured with `torch.cuda.synchronize()` and we only use fp32 for calculation. ## To Reproduce Steps to reproduce the behavior: 1. follow the [notebook ](https://github.com/microsoft/AzureML-BERT/blob/master/pretrain/PyTorch/notebooks/BERT_Pretrain.ipynb) instruction to run the model 1. Switch the DDP package from `distributed_apex.DistributedDataParallel` to `torch.nn.parallel.DistributedDataParallel` at [this line](https://github.com/microsoft/AzureML-BERT/blob/be1d2f88a8ff20792a697c92fac1ba4c42fafd57/pretrain/PyTorch/train.py#L418) 1. Run the training script based on the [config file](https://github.com/microsoft/AzureML-BERT/blob/master/pretrain/configs/bert-large.json) <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> With full fp32 model, `torch.nn.parallel.DistributedDataParallel` should have around the same performance compared to `distributed_apex.DistributedDataParallel`. But it is actually ~3x slower on each iteration and ~10x slower on back propagation. ## Environment PyTorch version: 1.1.0a0+828a6a3 Is debug build: No CUDA used to build PyTorch: 10.1.163 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 10.1.163 Nvidia driver version: 418.87.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.0 How you installed PyTorch: FROM nvcr.io/nvidia/pytorch:19.05-py3 ## Additional context <!-- Add any other context about the problem here. --> cc @ezyang @gchanan @zou3519 @jerryzh168 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @VitalyFedyunin @ngimel @mruberry
1.0
torch.nn.parallel.DistributedDataParallel is slow on backpropagation - ## 🐛 Bug <!-- A clear and concise description of what the bug is. --> We are training with Bert model from [this repository](https://github.com/microsoft/AzureML-BERT/tree/master). We found out that when using `torch.nn.parallel.DistributedDataParallel`, we incur a ~3x performance slowdown per iteration compared to the default Apex DistributedDataParallel. We broke down the training into steps and found out the major slowdown happened at the backpropagation step. `loss.backward()` is ~10x slower for models wrapped with `torch.nn.parallel.DistributedDataParallel` compared to `distributed_apex.DistributedDataParallel`. The time is measured with `torch.cuda.synchronize()` and we only use fp32 for calculation. ## To Reproduce Steps to reproduce the behavior: 1. follow the [notebook ](https://github.com/microsoft/AzureML-BERT/blob/master/pretrain/PyTorch/notebooks/BERT_Pretrain.ipynb) instruction to run the model 1. Switch the DDP package from `distributed_apex.DistributedDataParallel` to `torch.nn.parallel.DistributedDataParallel` at [this line](https://github.com/microsoft/AzureML-BERT/blob/be1d2f88a8ff20792a697c92fac1ba4c42fafd57/pretrain/PyTorch/train.py#L418) 1. Run the training script based on the [config file](https://github.com/microsoft/AzureML-BERT/blob/master/pretrain/configs/bert-large.json) <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> With full fp32 model, `torch.nn.parallel.DistributedDataParallel` should have around the same performance compared to `distributed_apex.DistributedDataParallel`. But it is actually ~3x slower on each iteration and ~10x slower on back propagation. ## Environment PyTorch version: 1.1.0a0+828a6a3 Is debug build: No CUDA used to build PyTorch: 10.1.163 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 10.1.163 Nvidia driver version: 418.87.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.0 How you installed PyTorch: FROM nvcr.io/nvidia/pytorch:19.05-py3 ## Additional context <!-- Add any other context about the problem here. --> cc @ezyang @gchanan @zou3519 @jerryzh168 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @VitalyFedyunin @ngimel @mruberry
priority
torch nn parallel distributeddataparallel is slow on backpropagation 🐛 bug we are training with bert model from we found out that when using torch nn parallel distributeddataparallel we incur a performance slowdown per iteration compared to the default apex distributeddataparallel we broke down the training into steps and found out the major slowdown happened at the backpropagation step loss backward is slower for models wrapped with torch nn parallel distributeddataparallel compared to distributed apex distributeddataparallel the time is measured with torch cuda synchronize and we only use for calculation to reproduce steps to reproduce the behavior follow the instruction to run the model switch the ddp package from distributed apex distributeddataparallel to torch nn parallel distributeddataparallel at run the training script based on the expected behavior with full model torch nn parallel distributeddataparallel should have around the same performance compared to distributed apex distributeddataparallel but it is actually slower on each iteration and slower on back propagation environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version version python version is cuda available yes cuda runtime version nvidia driver version cudnn version usr lib linux gnu libcudnn so how you installed pytorch from nvcr io nvidia pytorch additional context cc ezyang gchanan pietern mrshenli zhaojuanmao satgera rohan varma gqchen aazzolini vitalyfedyunin ngimel mruberry
1
652,084
21,521,070,618
IssuesEvent
2022-04-28 14:15:30
lucypoulton/acebook
https://api.github.com/repos/lucypoulton/acebook
closed
Continue making everything pretty
size: large type: maintenance priority: high
- [ ] Profile page - [ ] Move post delete button to be in line with like button - [ ] Post background colour - [ ] Move comment image box
1.0
Continue making everything pretty - - [ ] Profile page - [ ] Move post delete button to be in line with like button - [ ] Post background colour - [ ] Move comment image box
priority
continue making everything pretty profile page move post delete button to be in line with like button post background colour move comment image box
1
654,960
21,674,662,701
IssuesEvent
2022-05-08 14:16:33
Materials-Consortia/optimade-python-tools
https://api.github.com/repos/Materials-Consortia/optimade-python-tools
closed
`meta->data_returned` is incorrect for paginated results with MongoDB
bug priority/high server
According to the spec, the `data_returned` field should be: > data_returned: an integer containing the total number of data resource objects returned for the current filter query, independent of pagination. Currently, the query [`?filter=elements HAS "Ag"`](https://optimade.herokuapp.com/v1/structures?filter=elements%20HAS%20%22Ag%22) yields 11 for data_returned, but adding [`&page_offset=1`](https://optimade.herokuapp.com/v1/structures?filter=elements%20HAS%20%22Ag%22&page_offset=1) yields 10. To my eyes, this is a bug. Other implementations (omdb, oqmd) would return 11 for both queries.
1.0
`meta->data_returned` is incorrect for paginated results with MongoDB - According to the spec, the `data_returned` field should be: > data_returned: an integer containing the total number of data resource objects returned for the current filter query, independent of pagination. Currently, the query [`?filter=elements HAS "Ag"`](https://optimade.herokuapp.com/v1/structures?filter=elements%20HAS%20%22Ag%22) yields 11 for data_returned, but adding [`&page_offset=1`](https://optimade.herokuapp.com/v1/structures?filter=elements%20HAS%20%22Ag%22&page_offset=1) yields 10. To my eyes, this is a bug. Other implementations (omdb, oqmd) would return 11 for both queries.
priority
meta data returned is incorrect for paginated results with mongodb according to the spec the data returned field should be data returned an integer containing the total number of data resource objects returned for the current filter query independent of pagination currently the query yields for data returned but adding yields to my eyes this is a bug other implementations omdb oqmd would return for both queries
1
165,400
6,275,938,009
IssuesEvent
2017-07-18 08:21:42
python/mypy
https://api.github.com/repos/python/mypy
closed
Cannot determine type error with TypeVar with value constraint
bug priority-0-high topic-type-variables
Assuming the following code snippet: ```python from typing import Callable, TypeVar, Generic T = TypeVar('T', str, int) Printer = Callable[[T], None] class Klass(Generic[T]): def __init__(self, msg: str, p: Printer[T]) -> None: self.msg = msg self.p = p def echo(self) -> None: print(self.msg) ``` `mypy` returns error for `print(self.msg)`: `error: Cannot determine type of 'msg'` `reveal_type` fails to determine type of `self.msg`: `reveal_type(self.msg)` results in ``` error: Cannot determine type of 'msg' Revealed type is 'Any' ``` If we changed type variable to unconstrained, `T = TypeVar('T')`, everything works as expected.
1.0
Cannot determine type error with TypeVar with value constraint - Assuming the following code snippet: ```python from typing import Callable, TypeVar, Generic T = TypeVar('T', str, int) Printer = Callable[[T], None] class Klass(Generic[T]): def __init__(self, msg: str, p: Printer[T]) -> None: self.msg = msg self.p = p def echo(self) -> None: print(self.msg) ``` `mypy` returns error for `print(self.msg)`: `error: Cannot determine type of 'msg'` `reveal_type` fails to determine type of `self.msg`: `reveal_type(self.msg)` results in ``` error: Cannot determine type of 'msg' Revealed type is 'Any' ``` If we changed type variable to unconstrained, `T = TypeVar('T')`, everything works as expected.
priority
cannot determine type error with typevar with value constraint assuming the following code snippet python from typing import callable typevar generic t typevar t str int printer callable none class klass generic def init self msg str p printer none self msg msg self p p def echo self none print self msg mypy returns error for print self msg error cannot determine type of msg reveal type fails to determine type of self msg reveal type self msg results in error cannot determine type of msg revealed type is any if we changed type variable to unconstrained t typevar t everything works as expected
1
582,664
17,367,269,822
IssuesEvent
2021-07-30 09:03:37
GEWIS/gewisweb
https://api.github.com/repos/GEWIS/gewisweb
closed
Adding/removing sign-up lists to/from an activity prevents creating an update proposal
For: Backend Priority: High Status: Confirmed Type: Bug
Laminas handles the old `Parameters` object a bit different, which means that `fields` is sometimes undefined. https://github.com/GEWIS/gewisweb/blob/5338b66a4ca3b78e75831d54bfe2094dbb8df52a/module/Activity/src/Service/Activity.php#L531 Furthermore, the comparison code no longer functions (with the same reasoning as above).
1.0
Adding/removing sign-up lists to/from an activity prevents creating an update proposal - Laminas handles the old `Parameters` object a bit different, which means that `fields` is sometimes undefined. https://github.com/GEWIS/gewisweb/blob/5338b66a4ca3b78e75831d54bfe2094dbb8df52a/module/Activity/src/Service/Activity.php#L531 Furthermore, the comparison code no longer functions (with the same reasoning as above).
priority
adding removing sign up lists to from an activity prevents creating an update proposal laminas handles the old parameters object a bit different which means that fields is sometimes undefined furthermore the comparison code no longer functions with the same reasoning as above
1
142,414
5,474,986,582
IssuesEvent
2017-03-11 05:50:18
CS2103JAN2017-W14-B3/main
https://api.github.com/repos/CS2103JAN2017-W14-B3/main
opened
Implement natural language processing for add command
priority.high type.task
Here are the acceptable format(s): add <name> - adds a task which can be done anytime. add <name> by <deadline> - adds a task which have to be done by the specified deadline. Note the keyword by. add <name> from <start time> to <end time> - adds a event which will take place between start time and end time. Note the keyword from and to. Remember not to delete the previous version of the add command update with start time Eg. format: add /p /s /e /d /t
1.0
Implement natural language processing for add command - Here are the acceptable format(s): add <name> - adds a task which can be done anytime. add <name> by <deadline> - adds a task which have to be done by the specified deadline. Note the keyword by. add <name> from <start time> to <end time> - adds a event which will take place between start time and end time. Note the keyword from and to. Remember not to delete the previous version of the add command update with start time Eg. format: add /p /s /e /d /t
priority
implement natural language processing for add command here are the acceptable format s add adds a task which can be done anytime add by adds a task which have to be done by the specified deadline note the keyword by add from to adds a event which will take place between start time and end time note the keyword from and to remember not to delete the previous version of the add command update with start time eg format add p s e d t
1
147,191
5,634,916,612
IssuesEvent
2017-04-05 22:41:33
forcedotcom/scmt-server
https://api.github.com/repos/forcedotcom/scmt-server
closed
Timestamps are not migrated correctly.
bug help wanted investigation priority - high
Timestamps are not migrated correctly for replies and notes for some cases.
1.0
Timestamps are not migrated correctly. - Timestamps are not migrated correctly for replies and notes for some cases.
priority
timestamps are not migrated correctly timestamps are not migrated correctly for replies and notes for some cases
1
232,268
7,656,936,653
IssuesEvent
2018-05-10 17:57:40
redsunservers/LoadoutBugTracker
https://api.github.com/repos/redsunservers/LoadoutBugTracker
closed
Strange kills not counting on pyro skin
:bug: Bug :mag: Server :rotating_light: High Priority
I unboxed a strange combustible science earlier today that's supposed to track kills, but for some reason isn't. I've got it equipped, I've made multiple non-assist kills as pyro across multiple gamemodes and it just doesn't seem to track anything. This might be another case of description fuckery or me simply being retarted, but the name and description both say that it counts kills so it may be an _actual bug_ this time
1.0
Strange kills not counting on pyro skin - I unboxed a strange combustible science earlier today that's supposed to track kills, but for some reason isn't. I've got it equipped, I've made multiple non-assist kills as pyro across multiple gamemodes and it just doesn't seem to track anything. This might be another case of description fuckery or me simply being retarted, but the name and description both say that it counts kills so it may be an _actual bug_ this time
priority
strange kills not counting on pyro skin i unboxed a strange combustible science earlier today that s supposed to track kills but for some reason isn t i ve got it equipped i ve made multiple non assist kills as pyro across multiple gamemodes and it just doesn t seem to track anything this might be another case of description fuckery or me simply being retarted but the name and description both say that it counts kills so it may be an actual bug this time
1
34,652
2,783,969,745
IssuesEvent
2015-05-07 05:52:35
punongbayan-araullo/tickets
https://api.github.com/repos/punongbayan-araullo/tickets
opened
MCE receives new engagement acceptance request notifications for Low risk and Medium risk pursuits
other priority - high status - evaluating system - pursuits
MCE receives new engagement acceptance request notifications for Low risk and Medium risk pursuits ![image 1](https://cloud.githubusercontent.com/assets/12133602/7509205/0f7faec2-f4c0-11e4-9c5d-c8d06aa9ca2d.png)
1.0
MCE receives new engagement acceptance request notifications for Low risk and Medium risk pursuits - MCE receives new engagement acceptance request notifications for Low risk and Medium risk pursuits ![image 1](https://cloud.githubusercontent.com/assets/12133602/7509205/0f7faec2-f4c0-11e4-9c5d-c8d06aa9ca2d.png)
priority
mce receives new engagement acceptance request notifications for low risk and medium risk pursuits mce receives new engagement acceptance request notifications for low risk and medium risk pursuits
1
372,733
11,027,009,296
IssuesEvent
2019-12-06 08:25:48
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Running fp32 softmax on a fp16 tensor for dimensions other than the last fails during the backward pass
high priority module: autograd module: cuda triaged
## 🐛 Bug When training models in FP16, it is typical to run softmax operations (such as within attention) in FP32. It seems that when you run softmax on a dimension other than the last dimension of the tensor, the backward pass fails. ## To Reproduce ```python import torch import torch.nn.functional as F a = torch.rand(2, 2, 2, 2, requires_grad=True).half().cuda() b = F.softmax(a, dim=-1, dtype=torch.float32)[0, 0, 0, 0] c = F.softmax(a, dim=1, dtype=torch.float32)[0, 0, 0, 0] b.backward(retain_graph=True) c.backward(retain_graph=True) ``` Output is ``` Traceback (most recent call last): File "softmax_bug.py", line 9, in <module> c.backward(retain_graph=True) File "---/torch/tensor.py", line 166, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "---/torch/autograd/__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: expected scalar type Float but found Half ``` Note that `b.backward` does not fail. ## Expected behavior No exception ## Environment ``` PyTorch version: 1.3.1 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.2.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: TITAN RTX GPU 1: TITAN RTX GPU 2: TITAN RTX GPU 3: TITAN RTX GPU 4: TITAN RTX GPU 5: TITAN RTX Nvidia driver version: 430.50 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.17.2 [pip] torch==1.3.1 [pip] torchvision==0.4.2 [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.3.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch [conda] torchvision 0.4.2 py37_cu101 pytorch ``` cc @ezyang @gchanan @zou3519 @jerryzh168 @SsnL @albanD @gqchen @ngimel
1.0
Running fp32 softmax on a fp16 tensor for dimensions other than the last fails during the backward pass - ## 🐛 Bug When training models in FP16, it is typical to run softmax operations (such as within attention) in FP32. It seems that when you run softmax on a dimension other than the last dimension of the tensor, the backward pass fails. ## To Reproduce ```python import torch import torch.nn.functional as F a = torch.rand(2, 2, 2, 2, requires_grad=True).half().cuda() b = F.softmax(a, dim=-1, dtype=torch.float32)[0, 0, 0, 0] c = F.softmax(a, dim=1, dtype=torch.float32)[0, 0, 0, 0] b.backward(retain_graph=True) c.backward(retain_graph=True) ``` Output is ``` Traceback (most recent call last): File "softmax_bug.py", line 9, in <module> c.backward(retain_graph=True) File "---/torch/tensor.py", line 166, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "---/torch/autograd/__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: expected scalar type Float but found Half ``` Note that `b.backward` does not fail. ## Expected behavior No exception ## Environment ``` PyTorch version: 1.3.1 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.2.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: TITAN RTX GPU 1: TITAN RTX GPU 2: TITAN RTX GPU 3: TITAN RTX GPU 4: TITAN RTX GPU 5: TITAN RTX Nvidia driver version: 430.50 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.17.2 [pip] torch==1.3.1 [pip] torchvision==0.4.2 [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.3.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch [conda] torchvision 0.4.2 py37_cu101 pytorch ``` cc @ezyang @gchanan @zou3519 @jerryzh168 @SsnL @albanD @gqchen @ngimel
priority
running softmax on a tensor for dimensions other than the last fails during the backward pass 🐛 bug when training models in it is typical to run softmax operations such as within attention in it seems that when you run softmax on a dimension other than the last dimension of the tensor the backward pass fails to reproduce python import torch import torch nn functional as f a torch rand requires grad true half cuda b f softmax a dim dtype torch c f softmax a dim dtype torch b backward retain graph true c backward retain graph true output is traceback most recent call last file softmax bug py line in c backward retain graph true file torch tensor py line in backward torch autograd backward self gradient retain graph create graph file torch autograd init py line in backward allow unreachable true allow unreachable flag runtimeerror expected scalar type float but found half note that b backward does not fail expected behavior no exception environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version version python version is cuda available yes cuda runtime version could not collect gpu models and configuration gpu titan rtx gpu titan rtx gpu titan rtx gpu titan rtx gpu titan rtx gpu titan rtx nvidia driver version cudnn version could not collect versions of relevant libraries numpy torch torchvision blas mkl mkl mkl service mkl fft mkl random pytorch pytorch torchvision pytorch cc ezyang gchanan ssnl alband gqchen ngimel
1
735,271
25,387,304,279
IssuesEvent
2022-11-21 23:14:48
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
opened
Further refine the use of diagnostics in TC-Pairs (draft)
type: enhancement alert: NEED MORE DEFINITION alert: NEED ACCOUNT KEY alert: NEED PROJECT ASSIGNMENT requestor: METplus Team MET: Tropical Cyclone Tools priority: high
## Describe the Enhancement ## In TC-Pairs... 1. When multiple diagnostic data sources are assigned to the same ATCF ID, make sure that all are written in separate TCDIAG lines after the corresponding TCMPR line. 2. Allow `match_to_track` to be to set to a value of `"BEST"`, and when it is, write a TCDIAG line for each track pair. When checking track locations, we'd check the BEST track (lat,lon), not the ADECK (lat,lon). 3. Support the case where `-adeck` and `-bdeck` are both set to forecast tracks (i.e. fcst-to-fcst comparison) and both have diagnostics defined. Make sure that 2 TCDIAG lines are written to the output. ### Time Estimate ### *Estimate the amount of work required here.* *Issues should represent approximately 1 to 3 days of work.* ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** - [ ] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
1.0
Further refine the use of diagnostics in TC-Pairs (draft) - ## Describe the Enhancement ## In TC-Pairs... 1. When multiple diagnostic data sources are assigned to the same ATCF ID, make sure that all are written in separate TCDIAG lines after the corresponding TCMPR line. 2. Allow `match_to_track` to be to set to a value of `"BEST"`, and when it is, write a TCDIAG line for each track pair. When checking track locations, we'd check the BEST track (lat,lon), not the ADECK (lat,lon). 3. Support the case where `-adeck` and `-bdeck` are both set to forecast tracks (i.e. fcst-to-fcst comparison) and both have diagnostics defined. Make sure that 2 TCDIAG lines are written to the output. ### Time Estimate ### *Estimate the amount of work required here.* *Issues should represent approximately 1 to 3 days of work.* ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** - [ ] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
priority
further refine the use of diagnostics in tc pairs draft describe the enhancement in tc pairs when multiple diagnostic data sources are assigned to the same atcf id make sure that all are written in separate tcdiag lines after the corresponding tcmpr line allow match to track to be to set to a value of best and when it is write a tcdiag line for each track pair when checking track locations we d check the best track lat lon not the adeck lat lon support the case where adeck and bdeck are both set to forecast tracks i e fcst to fcst comparison and both have diagnostics defined make sure that tcdiag lines are written to the output time estimate estimate the amount of work required here issues should represent approximately to days of work sub issues consider breaking the enhancement down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
1
215,701
7,296,836,089
IssuesEvent
2018-02-26 12:13:12
dagcoin/dagcoin
https://api.github.com/repos/dagcoin/dagcoin
closed
Editing of contact in address book crash app on Mac
bug high priority
<!--- Provide a general summary of the issue in the Title above --> When some existing data of contact is edited, app crashes while saving it ## Expected Behavior <!--- If you're describing a bug, tell us what should happen --> Contact data is saved successfully <!--- If you're suggesting a change/improvement, tell us how it should work --> ## Current Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> Error message: Uncaught exception: TypeError: Cannot read property 'opt' of undefined <!--- If suggesting a change/improvement, explain the difference from current behavior --> ## Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> ## Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> <!--- reproduce this bug. Include code to reproduce, if relevant --> 1. Go to address book 2. Save a contact 3. Edit contact first or last name 4. Save ## Context <!--- How has this issue affected you? What are you trying to accomplish? --> <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: * Environment name and version (e.g. Chrome 39, node.js 5.4): * Operating System and version (desktop or mobile): Mac OS High Sierra 10.13.1 * Link to your project: ![Screen Shot 2018-01-30 at 21.02.01.png](https://images.zenhubusercontent.com/5a6856fb4b5806bc2bc414a1/f31e0057-4b4c-40e1-8cf5-2102acc15549)
1.0
Editing of contact in address book crash app on Mac - <!--- Provide a general summary of the issue in the Title above --> When some existing data of contact is edited, app crashes while saving it ## Expected Behavior <!--- If you're describing a bug, tell us what should happen --> Contact data is saved successfully <!--- If you're suggesting a change/improvement, tell us how it should work --> ## Current Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> Error message: Uncaught exception: TypeError: Cannot read property 'opt' of undefined <!--- If suggesting a change/improvement, explain the difference from current behavior --> ## Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> ## Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> <!--- reproduce this bug. Include code to reproduce, if relevant --> 1. Go to address book 2. Save a contact 3. Edit contact first or last name 4. Save ## Context <!--- How has this issue affected you? What are you trying to accomplish? --> <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: * Environment name and version (e.g. Chrome 39, node.js 5.4): * Operating System and version (desktop or mobile): Mac OS High Sierra 10.13.1 * Link to your project: ![Screen Shot 2018-01-30 at 21.02.01.png](https://images.zenhubusercontent.com/5a6856fb4b5806bc2bc414a1/f31e0057-4b4c-40e1-8cf5-2102acc15549)
priority
editing of contact in address book crash app on mac when some existing data of contact is edited app crashes while saving it expected behavior contact data is saved successfully current behavior error message uncaught exception typeerror cannot read property opt of undefined possible solution steps to reproduce for bugs go to address book save a contact edit contact first or last name save context your environment version used environment name and version e g chrome node js operating system and version desktop or mobile mac os high sierra link to your project
1
102,716
4,159,154,160
IssuesEvent
2016-06-17 07:49:20
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
[customer] xcatd process uses up the cpu resource when xcatd port is scanned
component:xcatd priority:high status:pending type:bug xCAT 2.12.1 Sprint 1
Two examples shown using a openssl client or nap to scan cat port 3001 * Run openssl client from IO node ~~~~ $ openssl s_client -quiet -no_ssl3 -no_ssl2 -connect 192.168.45.20:3001 -rand /bin/nice 70728 semi-random bytes loaded depth=1 CN = xCAT CA verify error:num=19:self signed certificate in certificate chain verify return:0 test ~~~~ Top on xcat management node ~~~~ op - 15:31:13 up 133 days, 15:19, 3 users, load average: 7.47, 11.31, 5.67 Tasks: 280 total, 27 running, 253 sleeping, 0 stopped, 0 zombie %Cpu(s): 10.5 us, 2.1 sy, 0.0 ni, 87.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 6776320 total, 759104 free, 756288 used, 5260928 buff/cache KiB Swap: 4095936 total, 4073152 free, 22784 used. 5068672 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 16518 root 20 0 58176 49344 7168 R 100.0 0.7 0:40.29 xCATd SSL: Inst ~~~~ * From IO node run nmap: ~~~~ $ nmap -sV --reason -PN -n -p 3001 192.168.45.20 Starting Nmap 6.40 ( http://nmap.org ) at 2016-04-29 15:24 PDT Nmap scan report for 192.168.45.20 Host is up, received arp-response (0.000063s latency). PORT STATE SERVICE REASON VERSION 3001/tcp open ssl/nessus? syn-ack MAC Address: E4:1F:13:06:20:40 (IBM) Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 151.90 seconds ~~~~ Top on xcat management node. ~~~~ op - 15:27:01 up 133 days, 15:14, 3 users, load average: 17.91, 6.16, 2.25 Tasks: 304 total, 50 running, 254 sleeping, 0 stopped, 0 zombie %Cpu(s): 90.0 us, 10.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 6776320 total, 308160 free, 1207232 used, 5260928 buff/cache KiB Swap: 4095936 total, 4073152 free, 22784 used. 4617728 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 16253 root 20 0 58176 49408 7168 R 35.5 0.7 0:37.67 xCATd SSL: Inst 16315 root 20 0 58176 49408 7168 R 35.5 0.7 0:08.34 xCATd SSL: Inst 16183 root 20 0 58176 49408 7168 R 35.2 0.7 1:36.84 xCATd SSL: Inst 16287 root 20 0 58176 49408 7168 R 34.2 0.7 0:18.27 xCATd SSL: Inst 16207 root 20 0 58176 49408 7168 R 33.6 0.7 1:11.96 xCATd SSL: Inst 16219 root 20 0 58176 49408 7168 R 33.6 0.7 1:01.39 xCATd SSL: Inst 16223 root 20 0 58176 49408 7168 R 33.6 0.7 0:58.95 xCATd SSL: Inst 16270 root 20 0 58176 49408 7168 R 33.6 0.7 0:28.34 xCATd SSL: Inst 16275 root 20 0 58176 49408 7168 R 33.6 0.7 0:22.27 xCATd SSL: Inst 16308 root 20 0 58176 49408 7168 R 33.6 0.7 0:10.09 xCATd SSL: Inst 16191 root 20 0 58176 49408 7168 R 33.2 0.7 1:36.64 xCATd SSL: Inst 16231 root 20 0 58176 49408 7168 R 33.2 0.7 0:57.78 xCATd SSL: Inst 16248 root 20 0 58176 49408 7168 R 33.2 0.7 0:36.95 xCATd SSL: Inst 16257 root 20 0 58176 49408 7168 R 33.2 0.7 0:33.88 xCATd SSL: Inst 16265 root 20 0 58176 49408 7168 R 33.2 0.7 0:29.35 xCATd SSL: Inst 16299 root 20 0 58176 49408 7168 R 33.2 0.7 0:13.83 xCATd SSL: Inst 16304 root 20 0 58176 49408 7168 R 33.2 0.7 0:11.65 xCATd SSL: Inst 16203 root 20 0 58176 49408 7168 R 32.9 0.7 1:23.83 xCATd SSL: Inst 16241 root 20 0 58176 49408 7168 R 32.9 0.7 0:42.27 xCATd SSL: Inst 16291 root 20 0 58176 49408 7168 R 32.9 0.7 0:15.95 xCATd SSL: Inst 16212 root 20 0 58176 49408 7168 R 32.6 0.7 1:05.56 xCATd SSL: Inst 16237 root 20 0 58176 49408 7168 R 32.2 0.7 0:47.04 xCATd SSL: Inst 16282 root 20 0 58176 49408 7168 R 31.6 0.7 0:19.98 xCATd SSL: Inst 16196 root 20 0 58176 49408 7168 R 30.9 0.7 1:25.43 xCATd SSL: Inst ~~~~
1.0
[customer] xcatd process uses up the cpu resource when xcatd port is scanned - Two examples shown using a openssl client or nap to scan cat port 3001 * Run openssl client from IO node ~~~~ $ openssl s_client -quiet -no_ssl3 -no_ssl2 -connect 192.168.45.20:3001 -rand /bin/nice 70728 semi-random bytes loaded depth=1 CN = xCAT CA verify error:num=19:self signed certificate in certificate chain verify return:0 test ~~~~ Top on xcat management node ~~~~ op - 15:31:13 up 133 days, 15:19, 3 users, load average: 7.47, 11.31, 5.67 Tasks: 280 total, 27 running, 253 sleeping, 0 stopped, 0 zombie %Cpu(s): 10.5 us, 2.1 sy, 0.0 ni, 87.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 6776320 total, 759104 free, 756288 used, 5260928 buff/cache KiB Swap: 4095936 total, 4073152 free, 22784 used. 5068672 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 16518 root 20 0 58176 49344 7168 R 100.0 0.7 0:40.29 xCATd SSL: Inst ~~~~ * From IO node run nmap: ~~~~ $ nmap -sV --reason -PN -n -p 3001 192.168.45.20 Starting Nmap 6.40 ( http://nmap.org ) at 2016-04-29 15:24 PDT Nmap scan report for 192.168.45.20 Host is up, received arp-response (0.000063s latency). PORT STATE SERVICE REASON VERSION 3001/tcp open ssl/nessus? syn-ack MAC Address: E4:1F:13:06:20:40 (IBM) Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 151.90 seconds ~~~~ Top on xcat management node. ~~~~ op - 15:27:01 up 133 days, 15:14, 3 users, load average: 17.91, 6.16, 2.25 Tasks: 304 total, 50 running, 254 sleeping, 0 stopped, 0 zombie %Cpu(s): 90.0 us, 10.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 6776320 total, 308160 free, 1207232 used, 5260928 buff/cache KiB Swap: 4095936 total, 4073152 free, 22784 used. 4617728 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 16253 root 20 0 58176 49408 7168 R 35.5 0.7 0:37.67 xCATd SSL: Inst 16315 root 20 0 58176 49408 7168 R 35.5 0.7 0:08.34 xCATd SSL: Inst 16183 root 20 0 58176 49408 7168 R 35.2 0.7 1:36.84 xCATd SSL: Inst 16287 root 20 0 58176 49408 7168 R 34.2 0.7 0:18.27 xCATd SSL: Inst 16207 root 20 0 58176 49408 7168 R 33.6 0.7 1:11.96 xCATd SSL: Inst 16219 root 20 0 58176 49408 7168 R 33.6 0.7 1:01.39 xCATd SSL: Inst 16223 root 20 0 58176 49408 7168 R 33.6 0.7 0:58.95 xCATd SSL: Inst 16270 root 20 0 58176 49408 7168 R 33.6 0.7 0:28.34 xCATd SSL: Inst 16275 root 20 0 58176 49408 7168 R 33.6 0.7 0:22.27 xCATd SSL: Inst 16308 root 20 0 58176 49408 7168 R 33.6 0.7 0:10.09 xCATd SSL: Inst 16191 root 20 0 58176 49408 7168 R 33.2 0.7 1:36.64 xCATd SSL: Inst 16231 root 20 0 58176 49408 7168 R 33.2 0.7 0:57.78 xCATd SSL: Inst 16248 root 20 0 58176 49408 7168 R 33.2 0.7 0:36.95 xCATd SSL: Inst 16257 root 20 0 58176 49408 7168 R 33.2 0.7 0:33.88 xCATd SSL: Inst 16265 root 20 0 58176 49408 7168 R 33.2 0.7 0:29.35 xCATd SSL: Inst 16299 root 20 0 58176 49408 7168 R 33.2 0.7 0:13.83 xCATd SSL: Inst 16304 root 20 0 58176 49408 7168 R 33.2 0.7 0:11.65 xCATd SSL: Inst 16203 root 20 0 58176 49408 7168 R 32.9 0.7 1:23.83 xCATd SSL: Inst 16241 root 20 0 58176 49408 7168 R 32.9 0.7 0:42.27 xCATd SSL: Inst 16291 root 20 0 58176 49408 7168 R 32.9 0.7 0:15.95 xCATd SSL: Inst 16212 root 20 0 58176 49408 7168 R 32.6 0.7 1:05.56 xCATd SSL: Inst 16237 root 20 0 58176 49408 7168 R 32.2 0.7 0:47.04 xCATd SSL: Inst 16282 root 20 0 58176 49408 7168 R 31.6 0.7 0:19.98 xCATd SSL: Inst 16196 root 20 0 58176 49408 7168 R 30.9 0.7 1:25.43 xCATd SSL: Inst ~~~~
priority
xcatd process uses up the cpu resource when xcatd port is scanned two examples shown using a openssl client or nap to scan cat port run openssl client from io node openssl s client quiet no no connect rand bin nice semi random bytes loaded depth cn xcat ca verify error num self signed certificate in certificate chain verify return test top on xcat management node op up days users load average tasks total running sleeping stopped zombie cpu s us sy ni id wa hi si st kib mem total free used buff cache kib swap total free used avail mem pid user pr ni virt res shr s cpu mem time command root r xcatd ssl inst from io node run nmap nmap sv reason pn n p starting nmap at pdt nmap scan report for host is up received arp response latency port state service reason version tcp open ssl nessus syn ack mac address ibm service detection performed please report any incorrect results at nmap done ip address host up scanned in seconds top on xcat management node op up days users load average tasks total running sleeping stopped zombie cpu s us sy ni id wa hi si st kib mem total free used buff cache kib swap total free used avail mem pid user pr ni virt res shr s cpu mem time command root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst root r xcatd ssl inst
1
498,251
14,404,333,361
IssuesEvent
2020-12-03 17:09:11
Scholar-6/brillder
https://api.github.com/repos/Scholar-6/brillder
closed
Add word count and reading estimate in Synthesis @150 words/minute
Adjustments High Level Priority
- [x] think there's a +1 error on words, it counts one more than I type - [x] if less than 2 mins, should be 1 **min** X secs - [x] if 0 mins, hide and show only seconds
1.0
Add word count and reading estimate in Synthesis @150 words/minute - - [x] think there's a +1 error on words, it counts one more than I type - [x] if less than 2 mins, should be 1 **min** X secs - [x] if 0 mins, hide and show only seconds
priority
add word count and reading estimate in synthesis words minute think there s a error on words it counts one more than i type if less than mins should be min x secs if mins hide and show only seconds
1
291,287
8,922,772,914
IssuesEvent
2019-01-21 13:57:04
CCAFS/MARLO
https://api.github.com/repos/CCAFS/MARLO
closed
MARLO Family meeting: Action points 14-09-2018
Priority - High Type - Enhancement
- [ ] Improve the linkages of funding sources with indication to the cross-cutting issues - [x] Include in the Synthesis section all the Expected Studies that are On-going and Extended.
1.0
MARLO Family meeting: Action points 14-09-2018 - - [ ] Improve the linkages of funding sources with indication to the cross-cutting issues - [x] Include in the Synthesis section all the Expected Studies that are On-going and Extended.
priority
marlo family meeting action points improve the linkages of funding sources with indication to the cross cutting issues include in the synthesis section all the expected studies that are on going and extended
1
756,555
26,476,138,249
IssuesEvent
2023-01-17 11:16:09
mantidproject/mantid
https://api.github.com/repos/mantidproject/mantid
opened
Colorbar not positioned correctly in Colorfill and Surface plots
High Priority Bug GUI Found in Beta
https://github.com/mantidproject/mantid/issues/34945#issuecomment-1384081041 **Describe the bug** This is likely to be the caused by calling tight_layout incorrectly. **To Reproduce** Open a colorfill or surface plot. ![](https://user-images.githubusercontent.com/55980573/212691715-5ba1746f-655b-4291-a01b-6226f27f5a06.png) and the error `This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.` **Expected behavior** Colorbar spaced appropriately to the right for any window size. **Platform/Version (please complete the following information):** - OS: all - Mantid Version nightly 15th Jan Manual testing
1.0
Colorbar not positioned correctly in Colorfill and Surface plots - https://github.com/mantidproject/mantid/issues/34945#issuecomment-1384081041 **Describe the bug** This is likely to be the caused by calling tight_layout incorrectly. **To Reproduce** Open a colorfill or surface plot. ![](https://user-images.githubusercontent.com/55980573/212691715-5ba1746f-655b-4291-a01b-6226f27f5a06.png) and the error `This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.` **Expected behavior** Colorbar spaced appropriately to the right for any window size. **Platform/Version (please complete the following information):** - OS: all - Mantid Version nightly 15th Jan Manual testing
priority
colorbar not positioned correctly in colorfill and surface plots describe the bug this is likely to be the caused by calling tight layout incorrectly to reproduce open a colorfill or surface plot and the error this figure includes axes that are not compatible with tight layout so results might be incorrect expected behavior colorbar spaced appropriately to the right for any window size platform version please complete the following information os all mantid version nightly jan manual testing
1
310,460
9,498,924,848
IssuesEvent
2019-04-24 04:04:51
thecyberdetective/Tournament
https://api.github.com/repos/thecyberdetective/Tournament
opened
Various UI Fixes
high priority todo
a) Hide the Team Column when teams aren't enabled b) Left Align Headers (Name, Team, Personality) c) Replace Team # with Team Name d) Fix padding to be more natural e) Put a scrollviewer on participants list
1.0
Various UI Fixes - a) Hide the Team Column when teams aren't enabled b) Left Align Headers (Name, Team, Personality) c) Replace Team # with Team Name d) Fix padding to be more natural e) Put a scrollviewer on participants list
priority
various ui fixes a hide the team column when teams aren t enabled b left align headers name team personality c replace team with team name d fix padding to be more natural e put a scrollviewer on participants list
1
84,434
3,664,759,050
IssuesEvent
2016-02-19 13:22:27
klusekrules/space-explorers
https://api.github.com/repos/klusekrules/space-explorers
closed
Brak łapania wyjątku w klasie Konsola
priority - high type - bug
Nie jest obsługiwana sytuacja wystąpienia wyjątku w klasie Konsola. Powoduje zamknięcie okna i brak możliwości wyłączenia programu.
1.0
Brak łapania wyjątku w klasie Konsola - Nie jest obsługiwana sytuacja wystąpienia wyjątku w klasie Konsola. Powoduje zamknięcie okna i brak możliwości wyłączenia programu.
priority
brak łapania wyjątku w klasie konsola nie jest obsługiwana sytuacja wystąpienia wyjątku w klasie konsola powoduje zamknięcie okna i brak możliwości wyłączenia programu
1
540,586
15,814,066,801
IssuesEvent
2021-04-05 08:49:38
AY2021S2-CS2103T-T12-4/tp
https://api.github.com/repos/AY2021S2-CS2103T-T12-4/tp
closed
[PE-D] Capitalisation of imposter.jar in UG
priority.High severity.VeryLow type.Bug
![Screenshot 2021-04-03 at 2.44.24 PM.png](https://raw.githubusercontent.com/samuelfangjw/ped/main/files/f3838b34-829d-4123-b558-241025500f49.png) In the UG, it says `imposter.jar` but the jar file is named `imPoster.jar` instead. <!--session: 1617429943449-e56bf182-40ef-47bf-9213-d664c9ca18a6--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: samuelfangjw/ped#11
1.0
[PE-D] Capitalisation of imposter.jar in UG - ![Screenshot 2021-04-03 at 2.44.24 PM.png](https://raw.githubusercontent.com/samuelfangjw/ped/main/files/f3838b34-829d-4123-b558-241025500f49.png) In the UG, it says `imposter.jar` but the jar file is named `imPoster.jar` instead. <!--session: 1617429943449-e56bf182-40ef-47bf-9213-d664c9ca18a6--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: samuelfangjw/ped#11
priority
capitalisation of imposter jar in ug in the ug it says imposter jar but the jar file is named imposter jar instead labels severity verylow type documentationbug original samuelfangjw ped
1
657,651
21,799,367,048
IssuesEvent
2022-05-16 02:04:33
okTurtles/group-income
https://api.github.com/repos/okTurtles/group-income
closed
Implement distribution date banner warning widget on dashboard
Kind:Enhancement App:Frontend Priority:High Note:UI/UX
### Problem The distribution date banner warning has been wonderfully designed by @leihla and is ready for implementation. <img width="694" alt="Screen Shot 2022-04-27 at 7 35 42 PM" src="https://user-images.githubusercontent.com/138706/165665456-153d2bff-ed80-44c7-b11d-74c9caea46b5.png"> ### Solution Send a PR with implemented design. Figma: https://www.figma.com/file/mxGadAHfkWH6qApebQvcdN/Group-Income-2.0?node-id=22627%3A203079
1.0
Implement distribution date banner warning widget on dashboard - ### Problem The distribution date banner warning has been wonderfully designed by @leihla and is ready for implementation. <img width="694" alt="Screen Shot 2022-04-27 at 7 35 42 PM" src="https://user-images.githubusercontent.com/138706/165665456-153d2bff-ed80-44c7-b11d-74c9caea46b5.png"> ### Solution Send a PR with implemented design. Figma: https://www.figma.com/file/mxGadAHfkWH6qApebQvcdN/Group-Income-2.0?node-id=22627%3A203079
priority
implement distribution date banner warning widget on dashboard problem the distribution date banner warning has been wonderfully designed by leihla and is ready for implementation img width alt screen shot at pm src solution send a pr with implemented design figma
1