Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
92,802 | 3,874,061,055 | IssuesEvent | 2016-04-11 19:09:28 | PolarisSS13/Polaris | https://api.github.com/repos/PolarisSS13/Polaris | closed | You can still lock down traitorsynths | Bug Duplicate Bug Report Priority: High | #### Brief description of the issue
An AI successfully locked down a Traitor stationbound.
This is still happening. **Again**. Agaegrrrr.
#### What you expected to happen
Nothing.
#### What actually happened
Locked successfully.
#### Steps to reproduce
Yeah yeah
#### Additional info:
- **Server Revision**: master - 2016-04-07 d9dd1221fc006ab95b822f44fa20eda4347ff0f0
- **Anything else you may wish to add** (Location if it's a mapping issue, etc)
| 1.0 | You can still lock down traitorsynths - #### Brief description of the issue
An AI successfully locked down a Traitor stationbound.
This is still happening. **Again**. Agaegrrrr.
#### What you expected to happen
Nothing.
#### What actually happened
Locked successfully.
#### Steps to reproduce
Yeah yeah
#### Additional info:
- **Server Revision**: master - 2016-04-07 d9dd1221fc006ab95b822f44fa20eda4347ff0f0
- **Anything else you may wish to add** (Location if it's a mapping issue, etc)
| priority | you can still lock down traitorsynths brief description of the issue an ai successfully locked down a traitor stationbound this is still happening again agaegrrrr what you expected to happen nothing what actually happened locked successfully steps to reproduce yeah yeah additional info server revision master anything else you may wish to add location if it s a mapping issue etc | 1 |
52,065 | 3,020,655,855 | IssuesEvent | 2015-07-31 09:29:02 | axsh/wakame-vdc | https://api.github.com/repos/axsh/wakame-vdc | closed | Split yum repository into development and stable | Priority : High Type : Feature | ### Problem
Currently we have one single yum repository for Wakame-vdc. This was ok since we only offered our current master branch to which all feature branches were merged directly.
When we release Version 1.0, we will implement semantic versioning and thus have both a stable and a development version.
### Solution
https://github.com/axsh/wakame-vdc/issues/614 should be completed first.
1. [x] Have the current rpmbuild CI job build packages from the branch `develop` instead of master.
2. [x] Create a new yum repository for stable releases.
3. [ ] Create a new CI job that builds the rpm packages for said stable release. We have made a CI job like this for OpenVNet and this one should behave in the same way.
* Version numbers will be assigned using `git tag` on every commit of a master branch.
* The stable package build job will be done manually and take `version` as a parameter.
* The job should then checkout the provided tag and build RPM packages from there. | 1.0 | Split yum repository into development and stable - ### Problem
Currently we have one single yum repository for Wakame-vdc. This was ok since we only offered our current master branch to which all feature branches were merged directly.
When we release Version 1.0, we will implement semantic versioning and thus have both a stable and a development version.
### Solution
https://github.com/axsh/wakame-vdc/issues/614 should be completed first.
1. [x] Have the current rpmbuild CI job build packages from the branch `develop` instead of master.
2. [x] Create a new yum repository for stable releases.
3. [ ] Create a new CI job that builds the rpm packages for said stable release. We have made a CI job like this for OpenVNet and this one should behave in the same way.
* Version numbers will be assigned using `git tag` on every commit of a master branch.
* The stable package build job will be done manually and take `version` as a parameter.
* The job should then checkout the provided tag and build RPM packages from there. | priority | split yum repository into development and stable problem currently we have one single yum repository for wakame vdc this was ok since we only offered our current master branch to which all feature branches were merged directly when we release version we will implement semantic versioning and thus have both a stable and a development version solution should be completed first have the current rpmbuild ci job build packages from the branch develop instead of master create a new yum repository for stable releases create a new ci job that builds the rpm packages for said stable release we have made a ci job like this for openvnet and this one should behave in the same way version numbers will be assigned using git tag on every commit of a master branch the stable package build job will be done manually and take version as a parameter the job should then checkout the provided tag and build rpm packages from there | 1 |
488,238 | 14,074,891,221 | IssuesEvent | 2020-11-04 08:10:31 | DIAGNijmegen/website-content | https://api.github.com/repos/DIAGNijmegen/website-content | closed | Customizable css for each website | Priority: High enhancement | ~Implement publication page version without grayish background, and with 100% black letters and with open sans as font.~
Implement css customization options for each individual website
| 1.0 | Customizable css for each website - ~Implement publication page version without grayish background, and with 100% black letters and with open sans as font.~
Implement css customization options for each individual website
| priority | customizable css for each website implement publication page version without grayish background and with black letters and with open sans as font implement css customization options for each individual website | 1 |
379,891 | 11,243,425,673 | IssuesEvent | 2020-01-10 03:04:08 | SesameCrew/sesame_issues | https://api.github.com/repos/SesameCrew/sesame_issues | closed | Email/call/message icons don't show | bug high priority | Rooted Samsung Galaxy S7 (lineage os) + open gapps + Google messages / contacts
Before root (Samsung messages/contacts app were installed) there were email/call/message icons on right side of contact when searching. Now I have shortcuts to message/call but no icons, which have been easier to use than additional shortcuts.
I suggest implementing email/call/message shortcuts to open gapps
 | 1.0 | Email/call/message icons don't show - Rooted Samsung Galaxy S7 (lineage os) + open gapps + Google messages / contacts
Before root (Samsung messages/contacts app were installed) there were email/call/message icons on right side of contact when searching. Now I have shortcuts to message/call but no icons, which have been easier to use than additional shortcuts.
I suggest implementing email/call/message shortcuts to open gapps
 | priority | email call message icons don t show rooted samsung galaxy lineage os open gapps google messages contacts before root samsung messages contacts app were installed there were email call message icons on right side of contact when searching now i have shortcuts to message call but no icons which have been easier to use than additional shortcuts i suggest implementing email call message shortcuts to open gapps | 1 |
436,926 | 12,555,854,308 | IssuesEvent | 2020-06-07 07:41:40 | decentralized-identity/sidetree | https://api.github.com/repos/decentralized-identity/sidetree | closed | Make updates use public key hash in commit reveal | Spec v1 beta high priority | - remove ops keys
- updates will use public key hash as commit reveal | 1.0 | Make updates use public key hash in commit reveal - - remove ops keys
- updates will use public key hash as commit reveal | priority | make updates use public key hash in commit reveal remove ops keys updates will use public key hash as commit reveal | 1 |
231,361 | 7,631,476,048 | IssuesEvent | 2018-05-05 02:17:17 | fathyb/parcel-plugin-typescript | https://api.github.com/repos/fathyb/parcel-plugin-typescript | closed | The plugin is not compatible with Parcel 1.7.1 | bug high priority regression | ## Deps
```
parcel-bundler@1.7.1
typescript@2.8.3
```
## Error
```
/Users/item4/Projects/item4.github.io/node_modules/parcel-bundler/src/workerfarm/Worker.js:114
throw new Error(
^
Error: Worker Farm: Received message for unknown index for existing child. This should not happen!
at Worker.receive (/Users/item4/Projects/item4.github.io/node_modules/parcel-bundler/src/workerfarm/Worker.js:114:15)
at ChildProcess.emit (events.js:180:13)
at emit (internal/child_process.js:783:12)
at process._tickCallback (internal/process/next_tick.js:114:19)
error Command failed with exit code 1.
```
## Temporary fix solution
remove parcel-plugin-typescript
## reproduce codebase
https://github.com/item4/item4.github.io
(sorry, I can not make simple case) | 1.0 | The plugin is not compatible with Parcel 1.7.1 - ## Deps
```
parcel-bundler@1.7.1
typescript@2.8.3
```
## Error
```
/Users/item4/Projects/item4.github.io/node_modules/parcel-bundler/src/workerfarm/Worker.js:114
throw new Error(
^
Error: Worker Farm: Received message for unknown index for existing child. This should not happen!
at Worker.receive (/Users/item4/Projects/item4.github.io/node_modules/parcel-bundler/src/workerfarm/Worker.js:114:15)
at ChildProcess.emit (events.js:180:13)
at emit (internal/child_process.js:783:12)
at process._tickCallback (internal/process/next_tick.js:114:19)
error Command failed with exit code 1.
```
## Temporary fix solution
remove parcel-plugin-typescript
## reproduce codebase
https://github.com/item4/item4.github.io
(sorry, I can not make simple case) | priority | the plugin is not compatible with parcel deps parcel bundler typescript error users projects github io node modules parcel bundler src workerfarm worker js throw new error error worker farm received message for unknown index for existing child this should not happen at worker receive users projects github io node modules parcel bundler src workerfarm worker js at childprocess emit events js at emit internal child process js at process tickcallback internal process next tick js error command failed with exit code temporary fix solution remove parcel plugin typescript reproduce codebase sorry i can not make simple case | 1 |
355,177 | 10,577,143,010 | IssuesEvent | 2019-10-07 19:26:32 | careerfairsystems/nexpo | https://api.github.com/repos/careerfairsystems/nexpo | opened | We need dat' CV code fam | API Backend Frontend Priority: High Type: Question | In the app we get a URL to Amazon S3 where the CV is stored, but we get access denied when following the link. We need to mimic how you access the CV. Either we need your secret keys, or you open a nexpo URL where the CV can be read in the browser. Please help. | 1.0 | We need dat' CV code fam - In the app we get a URL to Amazon S3 where the CV is stored, but we get access denied when following the link. We need to mimic how you access the CV. Either we need your secret keys, or you open a nexpo URL where the CV can be read in the browser. Please help. | priority | we need dat cv code fam in the app we get a url to amazon where the cv is stored but we get access denied when following the link we need to mimic how you access the cv either we need your secret keys or you open a nexpo url where the cv can be read in the browser please help | 1 |
670,468 | 22,690,735,692 | IssuesEvent | 2022-07-04 19:50:28 | restarone/violet_rails | https://api.github.com/repos/restarone/violet_rails | closed | add subject to send_email API action, make subject / body dynamic, expose API entities in context | enhancement high priority WIP API - data pipeline v3 | Currently the email address and body are hard coded. The goal of this task is to add a subject as well and make both the subject and body dynamic so we can use Ruby-style string interpolation using `#{}`
The API Namespace/Resource should be exposed in this context as well, so we can do something like:
subject: `"#{api_resource.properties[:name]} welcome!"`
body `"We can help you with #{api_resource.properties[:problem]} !"`

| 1.0 | add subject to send_email API action, make subject / body dynamic, expose API entities in context - Currently the email address and body are hard coded. The goal of this task is to add a subject as well and make both the subject and body dynamic so we can use Ruby-style string interpolation using `#{}`
The API Namespace/Resource should be exposed in this context as well, so we can do something like:
subject: `"#{api_resource.properties[:name]} welcome!"`
body `"We can help you with #{api_resource.properties[:problem]} !"`

| priority | add subject to send email api action make subject body dynamic expose api entities in context currently the email address and body are hard coded the goal of this task is to add a subject as well and make both the subject and body dynamic so we can use ruby style string interpolation using the api namespace resource should be exposed in this context as well so we can do something like subject api resource properties welcome body we can help you with api resource properties | 1 |
271,072 | 8,475,543,485 | IssuesEvent | 2018-10-24 19:13:21 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | Fingerbank: does not start and block PacketFence services | Priority: Critical Priority: High Type: Bug | Oct 22 10:02:39 cluster3.zammit.corp packetfence[19497]: FATAL set-env-fingerbank-conf.pl(19497): Undefined subroutine &fingerbank::Util::get_proxy_url called at /usr/local/fingerbank/collector/set-env-fingerbank-conf.pl line 52. | 2.0 | Fingerbank: does not start and block PacketFence services - Oct 22 10:02:39 cluster3.zammit.corp packetfence[19497]: FATAL set-env-fingerbank-conf.pl(19497): Undefined subroutine &fingerbank::Util::get_proxy_url called at /usr/local/fingerbank/collector/set-env-fingerbank-conf.pl line 52. | priority | fingerbank does not start and block packetfence services oct zammit corp packetfence fatal set env fingerbank conf pl undefined subroutine fingerbank util get proxy url called at usr local fingerbank collector set env fingerbank conf pl line | 1 |
391,249 | 11,571,254,935 | IssuesEvent | 2020-02-20 21:08:25 | ansible/galaxy-dev | https://api.github.com/repos/ansible/galaxy-dev | closed | QE: Test Namespace APIs | area/QE priority/high status/new type/enhancement | The Namespace API is not part of the underlying pulp-ansible API. It is exposed only on AH and should be tested under the UI.
Test Cases:
- AH-0005
- AH-0006
- AH-0007
- AH-0008
- AH-0009
- AH-0010 | 1.0 | QE: Test Namespace APIs - The Namespace API is not part of the underlying pulp-ansible API. It is exposed only on AH and should be tested under the UI.
Test Cases:
- AH-0005
- AH-0006
- AH-0007
- AH-0008
- AH-0009
- AH-0010 | priority | qe test namespace apis the namespace api is not part of the underlying pulp ansible api it is exposed only on ah and should be tested under the ui test cases ah ah ah ah ah ah | 1 |
209,409 | 7,175,317,362 | IssuesEvent | 2018-01-31 04:40:39 | morris-jason/tanker | https://api.github.com/repos/morris-jason/tanker | closed | UI Authentication and Login | component/edge component/proof component/ui lang/golang lang/vue priority/high type/feature | ### User Statement:
As a API Consumer, I want to be able to login and get an API token.
As a API Producer, I want to be able to login to the admin ui.
### Details:
Login should use basic HTTP auth for now.
Hold the jwt in a cookie for browser requests.
Edge API for generating a token should always return a json payload.
### Acceptance Criteria:
- Login to Tanker using the admin ui and edge api.
- UI shows me my edge api token.
| 1.0 | UI Authentication and Login - ### User Statement:
As a API Consumer, I want to be able to login and get an API token.
As a API Producer, I want to be able to login to the admin ui.
### Details:
Login should use basic HTTP auth for now.
Hold the jwt in a cookie for browser requests.
Edge API for generating a token should always return a json payload.
### Acceptance Criteria:
- Login to Tanker using the admin ui and edge api.
- UI shows me my edge api token.
| priority | ui authentication and login user statement as a api consumer i want to be able to login and get an api token as a api producer i want to be able to login to the admin ui details login should use basic http auth for now hold the jwt in a cookie for browser requests edge api for generating a token should always return a json payload acceptance criteria login to tanker using the admin ui and edge api ui shows me my edge api token | 1 |
768,205 | 26,957,963,318 | IssuesEvent | 2023-02-08 16:07:05 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | closed | CMYK colour code styleguide incorrect. | priority: high style bug | The CMYK code for Magenta is incorrect on the styleguide. The correct one is 0/85/50/10. | 1.0 | CMYK colour code styleguide incorrect. - The CMYK code for Magenta is incorrect on the styleguide. The correct one is 0/85/50/10. | priority | cmyk colour code styleguide incorrect the cmyk code for magenta is incorrect on the styleguide the correct one is | 1 |
367,078 | 10,833,693,711 | IssuesEvent | 2019-11-11 13:30:53 | AY1920S1-CS2103-T16-3/main | https://api.github.com/repos/AY1920S1-CS2103-T16-3/main | closed | Like memes | priority.High type.Story | - [x] Add statistics engine and manager
- [x] Label likes a meme receives on MemeCard (UI)
- [ ] Build tests for stats engine and manager | 1.0 | Like memes - - [x] Add statistics engine and manager
- [x] Label likes a meme receives on MemeCard (UI)
- [ ] Build tests for stats engine and manager | priority | like memes add statistics engine and manager label likes a meme receives on memecard ui build tests for stats engine and manager | 1 |
336,404 | 10,188,663,258 | IssuesEvent | 2019-08-11 13:02:22 | vkettools/VitDeck | https://api.github.com/repos/vkettools/VitDeck | closed | Reduce template-load log output | TemplateLoader enhancement priority:high | TemplateLoader currently emits a log line for every copy, but with a complex template this produces a flood of log lines, so we want to reduce the count by emitting them in one batch. | 1.0 | Reduce template-load log output - TemplateLoader currently emits a log line for every copy, but with a complex template this produces a flood of log lines, so we want to reduce the count by emitting them in one batch. | priority | reduce template load log output templateloader currently emits a log line for every copy but with a complex template this produces a flood of log lines so we want to reduce the count by emitting them in one batch | 1 |
519,878 | 15,058,556,172 | IssuesEvent | 2021-02-03 23:42:32 | nlpsandbox/phi-deidentifier | https://api.github.com/repos/nlpsandbox/phi-deidentifier | closed | Block External Networking | Priority: High | We need to make sure none of the annotators in the stack can exfiltrate data to an outside server. | 1.0 | Block External Networking - We need to make sure none of the annotators in the stack can exfiltrate data to an outside server. | priority | block external networking we need to make sure none of the annotators in the stack can exfiltrate data to an outside server | 1 |
637,242 | 20,623,908,500 | IssuesEvent | 2022-03-07 20:18:17 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | opened | Add logic to automatically update VA calculated fields on Edit | Priority: High Type: Enhancement Language: Python Domain: API/ Databases | **Is your feature request related to a problem? Please describe.**
If I edit a VA, the calculated fields do not update based on my edits.
**Describe the solution you'd like**
Calculated fields should update on the VA automatically after saving an edit
| 1.0 | Add logic to automatically update VA calculated fields on Edit - **Is your feature request related to a problem? Please describe.**
If I edit a VA, the calculated fields do not update based on my edits.
**Describe the solution you'd like**
Calculated fields should update on the VA automatically after saving an edit
| priority | add logic to automatically update va calculated fields on edit is your feature request related to a problem please describe if i edit a va the calculated fields do not update based on my edits describe the solution you d like calculated fields should update on the va automatically after saving an edit | 1 |
559,116 | 16,550,389,307 | IssuesEvent | 2021-05-28 07:54:36 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | accounts.google.com - site is not usable | browser-firefox-mobile bugbug-probability-high engine-gecko priority-critical | <!-- @browser: Firefox Mobile 88.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:88.0) Gecko/88.0 Firefox/88.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74685 -->
**URL**: https://accounts.google.com/signin/oauth/consent?authuser=0
**Browser / Version**: Firefox Mobile 88.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Doesn't load. Stuck at the same page, where the loader keeps on loading.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | accounts.google.com - site is not usable - <!-- @browser: Firefox Mobile 88.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:88.0) Gecko/88.0 Firefox/88.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74685 -->
**URL**: https://accounts.google.com/signin/oauth/consent?authuser=0
**Browser / Version**: Firefox Mobile 88.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Doesn't load. Stuck at the same page, where the loader keeps on loading.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | accounts google com site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce doesn t load stuck at the same page where the loader keeps on loading browser configuration none from with ❤️ | 1 |
599,123 | 18,265,990,339 | IssuesEvent | 2021-10-04 08:31:10 | stevenwaterman/Lexoral | https://api.github.com/repos/stevenwaterman/Lexoral | opened | Add tutorial | enhancement high priority editor | When someone opens the editor for the first time it should show them how to use Lexoral | 1.0 | Add tutorial - When someone opens the editor for the first time it should show them how to use Lexoral | priority | add tutorial when someone opens the editor for the first time it should show them how to use lexoral | 1 |
654,328 | 21,648,170,344 | IssuesEvent | 2022-05-06 06:12:03 | Jernskegg/CI-portfolio-Textabase | https://api.github.com/repos/Jernskegg/CI-portfolio-Textabase | closed | USER STORY: login as an admin | 3. High priority | As an **Admin/site owner**, I can **Login into admin panel** so that **administrate and manage users**
| 1.0 | USER STORY: login as an admin - As an **Admin/site owner**, I can **Login into admin panel** so that **administrate and manage users**
| priority | user story login as an admin as an admin site owner i can login into admin panel so that administrate and manage users | 1 |
159,317 | 6,043,549,457 | IssuesEvent | 2017-06-11 22:56:32 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Embed Block: Create aliases for all the supported blocks | Blocks Priority High [Component] Inserter | The embed block when inserted allows you to paste any supported oembed.
In addition to having a generic embed block, we should have aliases for every supported oembed, so they are searchable in the inserter. For example even though you can insert a tweet into the embed block directly and it will work, you should also be able to search the inserter and find a block called "Tweet" (or "Twitter"?), insert that, then paste the URL, even though it's technically the same block as the embed block, save for an icon and a label. | 1.0 | Embed Block: Create aliases for all the supported blocks - The embed block when inserted allows you to paste any supported oembed.
In addition to having a generic embed block, we should have aliases for every supported oembed, so they are searchable in the inserter. For example even though you can insert a tweet into the embed block directly and it will work, you should also be able to search the inserter and find a block called "Tweet" (or "Twitter"?), insert that, then paste the URL, even though it's technically the same block as the embed block, save for an icon and a label. | priority | embed block create aliases for all the supported blocks the embed block when inserted allows you to paste any supported oembed in addition to having a generic embed block we should have aliases for every supported oembed so they are searchable in the inserter for example even though you can insert a tweet into the embed block directly and it will work you should also be able to search the inserter and find a block called tweet or twitter insert that then paste the url even though it s technically the same block as the embed block save for an icon and a label | 1 |
302,786 | 9,292,425,928 | IssuesEvent | 2019-03-22 03:04:13 | ClinGen/clincoded | https://api.github.com/repos/ClinGen/clincoded | closed | Change MONDO IDs for Gene Disease records | EP request GCI R25 curation edit curator review external curator priority: high | Change MONDO IDs for two GCI entries for the Hearing Loss EP:
1. Change of the MONDO disease term for the following linked entry, to MONDO:0008975 "otospondylomegaepiphyseal dysplasia": https://curation.clinicalgenome.org/curation-central/?gdm=ba9039b2-d139-472d-90b9-fef79f9f532e
2. Change of the MONDO disease term for the following linked entry, to MONDO:0008975 "otospondylomegaepiphyseal dysplasia": https://curation.clinicalgenome.org/curation-central/?gdm=b70a32eb-5863-4859-bb1b-0696dd4d7a3f | 1.0 | Change MONDO IDs for Gene Disease records - Change MONDO IDs for two GCI entries for the Hearing Loss EP:
1. Change of the MONDO disease term for the following linked entry, to MONDO:0008975 "otospondylomegaepiphyseal dysplasia": https://curation.clinicalgenome.org/curation-central/?gdm=ba9039b2-d139-472d-90b9-fef79f9f532e
2. Change of the MONDO disease term for the following linked entry, to MONDO:0008975 "otospondylomegaepiphyseal dysplasia": https://curation.clinicalgenome.org/curation-central/?gdm=b70a32eb-5863-4859-bb1b-0696dd4d7a3f | priority | change mondo ids for gene disease records change mondo ids for two gci entries for the hearing loss ep change of the mondo disease term for the following linked entry to mondo otospondylomegaepiphyseal dysplasia change of the mondo disease term for the following linked entry to mondo otospondylomegaepiphyseal dysplasia | 1 |
709,291 | 24,373,100,048 | IssuesEvent | 2022-10-03 21:11:27 | bireme/proethos | https://api.github.com/repos/bireme/proethos | closed | Implement the amendment submission screen | task severity 1 (critical/system down) priority 1 (high) | Bring the fields of the original protocol into steps 1, 2 and 3 of the monitoring action "Submit an amendment".
 | 1.0 | Implement the amendment submission screen - Bring the fields of the original protocol into steps 1, 2 and 3 of the monitoring action "Submit an amendment".
 | priority | implement the amendment submission screen bring the fields of the original protocol into steps and of the monitoring action submit an amendment | 1 |
357,023 | 10,600,772,563 | IssuesEvent | 2019-10-10 10:48:25 | nf-core/tools | https://api.github.com/repos/nf-core/tools | closed | iGenomes paths wrong for GRCm38 | bug high-priority | These lines should point to `GRCm38` and not `GRCh37`! Does this require a patch? If it isnt spotted by the pipeline developer it could lead to issues. Hopefully, the pipeline fails because the annotation files arent consistent but possibly not worth the risk...
https://github.com/nf-core/tools/blob/2e2fe8e2bed87b9d25582811115f429de9f48e33/nf_core/pipeline-template/%7B%7Bcookiecutter.name_noslash%7D%7D/conf/igenomes.config#L27-L28 | 1.0 | iGenomes paths wrong for GRCm38 - These lines should point to `GRCm38` and not `GRCh37`! Does this require a patch? If it isnt spotted by the pipeline developer it could lead to issues. Hopefully, the pipeline fails because the annotation files arent consistent but possibly not worth the risk...
https://github.com/nf-core/tools/blob/2e2fe8e2bed87b9d25582811115f429de9f48e33/nf_core/pipeline-template/%7B%7Bcookiecutter.name_noslash%7D%7D/conf/igenomes.config#L27-L28 | priority | igenomes paths wrong for these lines should point to and not does this require a patch if it isnt spotted by the pipeline developer it could lead to issues hopefully the pipeline fails because the annotation files arent consistent but possibly not worth the risk | 1 |
617,733 | 19,403,353,020 | IssuesEvent | 2021-12-19 15:26:55 | RedGrapefruit09/JustEnoughGems | https://api.github.com/repos/RedGrapefruit09/JustEnoughGems | closed | Weapon on-strike effects | enhancement development high priority | Weapons (introduced in #7) are pretty cool, but really boring. And the solution to that is to add a config with a list of effects which will be applied when you hit something firstly on the enemy, secondly on yourself. | 1.0 | Weapon on-strike effects - Weapons (introduced in #7) are pretty cool, but really boring. And the solution to that is to add a config with a list of effects which will be applied when you hit something firstly on the enemy, secondly on yourself. | priority | weapon on strike effects weapons introduced in are pretty cool but really boring and the solution to that is to add a config with a list of effects which will be applied when you hit something firstly on the enemy secondly on yourself | 1 |
528,268 | 15,363,115,508 | IssuesEvent | 2021-03-01 20:23:16 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Fitting Improvements from SSC | High Priority MantidPlot Stale | Fitting Improvements:
- doesn't remember parameters
- doesn't remember limits
- which line is the peak? Make more obvious that the lines can be dragged.
- Expose statistics [FMP: I think this is covered by the largely unknown alg. CalculateChiSquare]
- Other ways of outputting how the fitting worked: a python dict, export HDF, etc."
| 1.0 | Fitting Improvements from SSC - Fitting Improvements:
- doesn't remember parameters
- doesn't remember limits
- which line is the peak? Make more obvious that the lines can be dragged.
- Expose statistics [FMP: I think this is covered by the largely unknown alg. CalculateChiSquare]
- Other ways of outputting how the fitting worked: a python dict, export HDF, etc."
| priority | fitting improvements from ssc fitting improvements doesn t remember parameters doesn t remember limits which line is the peak make more obvious that the lines can be dragged expose statistics other ways of outputting how the fitting worked a python dict export hdf etc | 1 |
597,854 | 18,214,029,081 | IssuesEvent | 2021-09-30 00:21:24 | lf-edge/edge-home-orchestration-go | https://api.github.com/repos/lf-edge/edge-home-orchestration-go | closed | [DataStorage] Runtime error | bug high priority | **Describe the bug**
A runtime error when edgex foundry servers are not running,
```
level=ERROR ts=2021-01-19T01:51:50.815737266Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48080/api/v1/ping\": dial tcp 127.0.0.1:48080: connect: connection refused"
level=INFO ts=2021-01-19T01:51:51.817181193Z app=datastorage source=init.go:144 msg="Check Metadata service's status by ping..."
level=INFO ts=2021-01-19T01:51:51.818287982Z app=datastorage source=init.go:144 msg="Check Data service's status by ping..."
level=ERROR ts=2021-01-19T01:51:51.822577012Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48081/api/v1/ping\": dial tcp 127.0.0.1:48081: connect: connection refused"
level=ERROR ts=2021-01-19T01:51:51.824049381Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48080/api/v1/ping\": dial tcp 127.0.0.1:48080: connect: connection refused"
level=INFO ts=2021-01-19T01:51:52.825768174Z app=datastorage source=init.go:144 msg="Check Metadata service's status by ping..."
level=INFO ts=2021-01-19T01:51:52.826907105Z app=datastorage source=init.go:144 msg="Check Data service's status by ping..."
level=ERROR ts=2021-01-19T01:51:52.830784824Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48081/api/v1/ping\": dial tcp 127.0.0.1:48081: connect: connection refused"
level=ERROR ts=2021-01-19T01:51:52.83209855Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48080/api/v1/ping\": dial tcp 127.0.0.1:48080: connect: connection refused"
INFO[2021-01-19T01:51:53Z]discovery.go:833 activeDiscovery [discoverymgr] activeDiscovery!!!
INFO[2021-01-19T01:51:53Z]discovery.go:571 func1 [deviceDetectionRoutine] edge-orchestration-3125da9e-1e9a-41aa-ac83-004725eb2d1e
level=ERROR ts=2021-01-19T01:51:53.83359109Z app=datastorage source=init.go:139 msg="dependency Metadata service checking time out"
level=ERROR ts=2021-01-19T01:51:53.834663766Z app=datastorage source=init.go:139 msg="dependency Data service checking time out"
level=INFO ts=2021-01-19T01:51:53.840074015Z app=datastorage source=httpserver.go:116 msg="Web server shutting down"
level=INFO ts=2021-01-19T01:51:53.841736032Z app=datastorage source=httpserver.go:107 msg="Web server stopped"
level=INFO ts=2021-01-19T01:51:54.341966491Z app=datastorage source=httpserver.go:118 msg="Web server shut down"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x8d1a0e]
goroutine 44 [running]:
github.com/edgexfoundry/device-sdk-go/internal/autoevent.(*manager).StopAutoEvents(0x0)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/internal/autoevent/manager.go:69 +0x4e
github.com/edgexfoundry/device-sdk-go/pkg/service.(*DeviceService).Stop(0xc0004c2780, 0xc0005ae600)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/pkg/service/service.go:134 +0x45
github.com/edgexfoundry/device-sdk-go/pkg/service.Main(0xe2dbda, 0xb, 0xe3a91d, 0x1a, 0xda46e0, 0xc0005a5de0, 0xf46e20, 0xc0005ae600, 0xc0000330a0, 0xc0005ac300, ...)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/pkg/service/main.go:69 +0x6fa
github.com/edgexfoundry/device-sdk-go/pkg/startup.Bootstrap(0xe2dbda, 0xb, 0xe3a91d, 0x1a, 0xda46e0, 0xc0005a5de0)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/pkg/startup/bootstrap.go:19 +0x117
created by github.com/lf-edge/edge-home-orchestration-go/src/controller/storagemgr.StorageImpl.StartStorage
/home/t25kim/edge-home-orchestration-go/src/controller/storagemgr/storage.go:51 +0xef
```
**To Reproduce**
1. Put necessary configuration files in /var/edge-orchestration/datastorage/
2. Run edge-home-orchestration-go
**Expected behavior**
Check the edgex foundry server in advance before starting Data Storage.
**Test environment configuration (please complete the following information):**
* Firmware version: Ubuntu 18.04
* Hardware: x86-64
* Edge Orchestration Release: Coconut
| 1.0 | [DataStorage] Runtime error - **Describe the bug**
A runtime error when edgex foundry servers are not running,
```
level=ERROR ts=2021-01-19T01:51:50.815737266Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48080/api/v1/ping\": dial tcp 127.0.0.1:48080: connect: connection refused"
level=INFO ts=2021-01-19T01:51:51.817181193Z app=datastorage source=init.go:144 msg="Check Metadata service's status by ping..."
level=INFO ts=2021-01-19T01:51:51.818287982Z app=datastorage source=init.go:144 msg="Check Data service's status by ping..."
level=ERROR ts=2021-01-19T01:51:51.822577012Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48081/api/v1/ping\": dial tcp 127.0.0.1:48081: connect: connection refused"
level=ERROR ts=2021-01-19T01:51:51.824049381Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48080/api/v1/ping\": dial tcp 127.0.0.1:48080: connect: connection refused"
level=INFO ts=2021-01-19T01:51:52.825768174Z app=datastorage source=init.go:144 msg="Check Metadata service's status by ping..."
level=INFO ts=2021-01-19T01:51:52.826907105Z app=datastorage source=init.go:144 msg="Check Data service's status by ping..."
level=ERROR ts=2021-01-19T01:51:52.830784824Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48081/api/v1/ping\": dial tcp 127.0.0.1:48081: connect: connection refused"
level=ERROR ts=2021-01-19T01:51:52.83209855Z app=datastorage source=init.go:154 msg="Get \"http://localhost:48080/api/v1/ping\": dial tcp 127.0.0.1:48080: connect: connection refused"
INFO[2021-01-19T01:51:53Z]discovery.go:833 activeDiscovery [discoverymgr] activeDiscovery!!!
INFO[2021-01-19T01:51:53Z]discovery.go:571 func1 [deviceDetectionRoutine] edge-orchestration-3125da9e-1e9a-41aa-ac83-004725eb2d1e
level=ERROR ts=2021-01-19T01:51:53.83359109Z app=datastorage source=init.go:139 msg="dependency Metadata service checking time out"
level=ERROR ts=2021-01-19T01:51:53.834663766Z app=datastorage source=init.go:139 msg="dependency Data service checking time out"
level=INFO ts=2021-01-19T01:51:53.840074015Z app=datastorage source=httpserver.go:116 msg="Web server shutting down"
level=INFO ts=2021-01-19T01:51:53.841736032Z app=datastorage source=httpserver.go:107 msg="Web server stopped"
level=INFO ts=2021-01-19T01:51:54.341966491Z app=datastorage source=httpserver.go:118 msg="Web server shut down"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x8d1a0e]
goroutine 44 [running]:
github.com/edgexfoundry/device-sdk-go/internal/autoevent.(*manager).StopAutoEvents(0x0)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/internal/autoevent/manager.go:69 +0x4e
github.com/edgexfoundry/device-sdk-go/pkg/service.(*DeviceService).Stop(0xc0004c2780, 0xc0005ae600)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/pkg/service/service.go:134 +0x45
github.com/edgexfoundry/device-sdk-go/pkg/service.Main(0xe2dbda, 0xb, 0xe3a91d, 0x1a, 0xda46e0, 0xc0005a5de0, 0xf46e20, 0xc0005ae600, 0xc0000330a0, 0xc0005ac300, ...)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/pkg/service/main.go:69 +0x6fa
github.com/edgexfoundry/device-sdk-go/pkg/startup.Bootstrap(0xe2dbda, 0xb, 0xe3a91d, 0x1a, 0xda46e0, 0xc0005a5de0)
/home/t25kim/edge-home-orchestration-go/vendor/github.com/edgexfoundry/device-sdk-go/pkg/startup/bootstrap.go:19 +0x117
created by github.com/lf-edge/edge-home-orchestration-go/src/controller/storagemgr.StorageImpl.StartStorage
/home/t25kim/edge-home-orchestration-go/src/controller/storagemgr/storage.go:51 +0xef
```
**To Reproduce**
1. Put necessary configuration files in /var/edge-orchestration/datastorage/
2. Run edge-home-orchestration-go
**Expected behavior**
Check the edgex foundry server in advance before starting Data Storage.
**Test environment configuration (please complete the following information):**
* Firmware version: Ubuntu 18.04
* Hardware: x86-64
* Edge Orchestration Release: Coconut
| priority | runtime error describe the bug a runtime error when edgex foundry servers are not running level error ts app datastorage source init go msg get dial tcp connect connection refused level info ts app datastorage source init go msg check metadata service s status by ping level info ts app datastorage source init go msg check data service s status by ping level error ts app datastorage source init go msg get dial tcp connect connection refused level error ts app datastorage source init go msg get dial tcp connect connection refused level info ts app datastorage source init go msg check metadata service s status by ping level info ts app datastorage source init go msg check data service s status by ping level error ts app datastorage source init go msg get dial tcp connect connection refused level error ts app datastorage source init go msg get dial tcp connect connection refused info discovery go activediscovery activediscovery info discovery go edge orchestration level error ts app datastorage source init go msg dependency metadata service checking time out level error ts app datastorage source init go msg dependency data service checking time out level info ts app datastorage source httpserver go msg web server shutting down level info ts app datastorage source httpserver go msg web server stopped level info ts app datastorage source httpserver go msg web server shut down panic runtime error invalid memory address or nil pointer dereference goroutine github com edgexfoundry device sdk go internal autoevent manager stopautoevents home edge home orchestration go vendor github com edgexfoundry device sdk go internal autoevent manager go github com edgexfoundry device sdk go pkg service deviceservice stop home edge home orchestration go vendor github com edgexfoundry device sdk go pkg service service go github com edgexfoundry device sdk go pkg service main home edge home orchestration go vendor github com edgexfoundry device sdk go pkg service main go 
github com edgexfoundry device sdk go pkg startup bootstrap home edge home orchestration go vendor github com edgexfoundry device sdk go pkg startup bootstrap go created by github com lf edge edge home orchestration go src controller storagemgr storageimpl startstorage home edge home orchestration go src controller storagemgr storage go to reproduce put necessary configuration files in var edge orchestration datastorage run edge home orchestration go expected behavior check the edgex foundry server in advance before starting data storage test environment configuration please complete the following information firmware version ubuntu hardware edge orchestration release coconut | 1 |
508,346 | 14,698,754,923 | IssuesEvent | 2021-01-04 07:11:54 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | docs.google.com - site is not usable | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64740 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://docs.google.com/forms/d/e/1FAIpQLSdATODmCll3Pznr7w1A4CIkJfHM1tUDp657Dc-XTFCtj7vRBg/formResponse
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
Radio button toggle animation gets stuck half way through
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/0a6bcb2d-caf6-4175-937d-1c8a50b62ead.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201223151005</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/3a23ab8c-51cf-492f-bbba-52284b0f9b73)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | docs.google.com - site is not usable - <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64740 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://docs.google.com/forms/d/e/1FAIpQLSdATODmCll3Pznr7w1A4CIkJfHM1tUDp657Dc-XTFCtj7vRBg/formResponse
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
Radio button toggle animation gets stuck half way through
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/0a6bcb2d-caf6-4175-937d-1c8a50b62ead.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201223151005</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/3a23ab8c-51cf-492f-bbba-52284b0f9b73)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | docs google com site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description buttons or links not working steps to reproduce radio button toggle animation gets stuck half way through view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
590,210 | 17,773,852,929 | IssuesEvent | 2021-08-30 16:35:47 | Franckyi/IBE-Editor | https://api.github.com/repos/Franckyi/IBE-Editor | closed | [BUG] Conflicts With Inventory Profiles Next | Type: Bug Minecraft: 1.17 Priority: High Loader: Fabric | When I used Inventory Profiles Next to sort a chest, the game crashed.
[crash-2021-08-30_18.59.30-client.txt](https://github.com/Franckyi/IBE-Editor/files/7076267/crash-2021-08-30_18.59.30-client.txt) | 1.0 | [BUG] Conflicts With Inventory Profiles Next - When I used Inventory Profiles Next to sort a chest, the game crashed.
[crash-2021-08-30_18.59.30-client.txt](https://github.com/Franckyi/IBE-Editor/files/7076267/crash-2021-08-30_18.59.30-client.txt) | priority | conflicts with inventory profiles next when i used inventory profiles next to sort a chest the game crashed | 1 |
662,602 | 22,145,591,931 | IssuesEvent | 2022-06-03 11:41:13 | asastats/channel | https://api.github.com/repos/asastats/channel | closed | Choice Coin shows incorrect value | bug high priority addressed | This is due to tokens sent for governance voting are still being accounted for whereas token were returned to wallet an hour ago. | 1.0 | Choice Coin shows incorrect value - This is due to tokens sent for governance voting are still being accounted for whereas token were returned to wallet an hour ago. | priority | choice coin shows incorrect value this is due to tokens sent for governance voting are still being accounted for whereas token were returned to wallet an hour ago | 1 |
658,230 | 21,881,531,916 | IssuesEvent | 2022-05-19 14:42:46 | manbuegom/tfg_project_22 | https://api.github.com/repos/manbuegom/tfg_project_22 | opened | Fix: Redirección y carga post llamada api rest | bug high priority | Añadir una pantalla de carga posterior a realizar alguna transación con el servidor. | 1.0 | Fix: Redirección y carga post llamada api rest - Añadir una pantalla de carga posterior a realizar alguna transación con el servidor. | priority | fix redirección y carga post llamada api rest añadir una pantalla de carga posterior a realizar alguna transación con el servidor | 1 |
747,651 | 26,094,436,211 | IssuesEvent | 2022-12-26 16:49:03 | mirayiyidogan/swe_573 | https://api.github.com/repos/mirayiyidogan/swe_573 | closed | Learn Django! | Dependency Priority: High training | Need to watch tutorials about django and do some practise before get into business | 1.0 | Learn Django! - Need to watch tutorials about django and do some practise before get into business | priority | learn django need to watch tutorials about django and do some practise before get into business | 1 |
286,437 | 8,788,163,527 | IssuesEvent | 2018-12-20 21:10:54 | thisisodense/tio-web | https://api.github.com/repos/thisisodense/tio-web | closed | Forkerte overskrifter i googles søgeresultat | backend bug frontend high priority | Den sidetitel, som blot burde gælde for forsiden, bliver ved en fejl overført til alle artiklerne også.

@mimse @terkelskibbylarsen @terkellarsen
| 1.0 | Forkerte overskrifter i googles søgeresultat - Den sidetitel, som blot burde gælde for forsiden, bliver ved en fejl overført til alle artiklerne også.

@mimse @terkelskibbylarsen @terkellarsen
| priority | forkerte overskrifter i googles søgeresultat den sidetitel som blot burde gælde for forsiden bliver ved en fejl overført til alle artiklerne også mimse terkelskibbylarsen terkellarsen | 1 |
413,835 | 12,092,821,607 | IssuesEvent | 2020-04-19 17:07:04 | Icyr/DnDApp | https://api.github.com/repos/Icyr/DnDApp | closed | Add races. | new feature priority: high | ~~- [ ] create Race Firebase Repository-~~
- [x] create Race Local Repository
- [x] prepopulate DB with races on first login/startup
- [x] add race to character
- [x] character creation: Race screen
- [x] update character view and character list
Our characters need races.
#4 #5 #6 must be completed before this task.
- Add a screen to character creation flow: Race
- Select one of the predefined races (fetched from DB)
- When race is selected, display a small description for that race
- Predefined races must be added to DB on first login/startup
- Display race in character view
- Update view in character list to display race | 1.0 | Add races. - ~~- [ ] create Race Firebase Repository-~~
- [x] create Race Local Repository
- [x] prepopulate DB with races on first login/startup
- [x] add race to character
- [x] character creation: Race screen
- [x] update character view and character list
Our characters need races.
#4 #5 #6 must be completed before this task.
- Add a screen to character creation flow: Race
- Select one of the predefined races (fetched from DB)
- When race is selected, display a small description for that race
- Predefined races must be added to DB on first login/startup
- Display race in character view
- Update view in character list to display race | priority | add races create race firebase repository create race local repository prepopulate db with races on first login startup add race to character character creation race screen update character view and character list our characters need races must be completed before this task add a screen to character creation flow race select one of the predefined races fetched from db when race is selected display a small description for that race predefined races must be added to db on first login startup display race in character view update view in character list to display race | 1 |
690,936 | 23,678,206,206 | IssuesEvent | 2022-08-28 12:09:39 | htcfreek/AutoIT-Scripts | https://api.github.com/repos/htcfreek/AutoIT-Scripts | closed | [GetDiskInfoFromWmi] Var not declared warning | bug Script-GetDiskInfoFromWmi priority-high | Fix var not declared warning on the variables:
- $sDiskHeader
- $sPartitionHeader
```
AutoIt3 Syntax Checker v3.3.14.5 Copyright (c) 2007-2013 Tylo & AutoIt Team
"D:\(...)\GetDiskInfoFromWmi.au3"(86,323) : warning: $sDiskHeader possibly not declared/created yet
$sDiskHeader = "DiskNum" & "||" & "DiskDeviceID" & "||" & "DiskManufacturer" & "||" & "DiskModel" & "||" & "DiskInterfaceType" & "||" & "DiskMediaType" & "||" & "DiskSerialNumber" & "||" & "DiskState" & "||" & "DiskSize" & "||" & "DiskInitType" & "||" & "DiskPartitionCount" & "||" & "WindowsRunningOnDisk (SystemDrive)"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
"D:\(...)\GetDiskInfoFromWmi.au3"(88,376) : warning: $sPartitionHeader possibly not declared/created yet
$sPartitionHeader = "DiskNum" & "||" & "PartitionNum" & "||" & "PartitionID" & "||" & "PartitionType" & "||" & "PartitionIsPrimary" & "||" & "PartitionIsBootPartition" & "||" & "PartitionLetter" & "||" & "PartitionLabel" & "||" & "PartitionFileSystem" & "||" & "PartitionSizeTotal" & "||" & "PartitionSizeUsed" & "||" & "PartitionSizeFree" & "||" & "PartitionIsSystemDrive"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
D:\(...) - 0 error(s), 2 warning(s)
```
| 1.0 | [GetDiskInfoFromWmi] Var not declared warning - Fix var not declared warning on the variables:
- $sDiskHeader
- $sPartitionHeader
```
AutoIt3 Syntax Checker v3.3.14.5 Copyright (c) 2007-2013 Tylo & AutoIt Team
"D:\(...)\GetDiskInfoFromWmi.au3"(86,323) : warning: $sDiskHeader possibly not declared/created yet
$sDiskHeader = "DiskNum" & "||" & "DiskDeviceID" & "||" & "DiskManufacturer" & "||" & "DiskModel" & "||" & "DiskInterfaceType" & "||" & "DiskMediaType" & "||" & "DiskSerialNumber" & "||" & "DiskState" & "||" & "DiskSize" & "||" & "DiskInitType" & "||" & "DiskPartitionCount" & "||" & "WindowsRunningOnDisk (SystemDrive)"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
"D:\(...)\GetDiskInfoFromWmi.au3"(88,376) : warning: $sPartitionHeader possibly not declared/created yet
$sPartitionHeader = "DiskNum" & "||" & "PartitionNum" & "||" & "PartitionID" & "||" & "PartitionType" & "||" & "PartitionIsPrimary" & "||" & "PartitionIsBootPartition" & "||" & "PartitionLetter" & "||" & "PartitionLabel" & "||" & "PartitionFileSystem" & "||" & "PartitionSizeTotal" & "||" & "PartitionSizeUsed" & "||" & "PartitionSizeFree" & "||" & "PartitionIsSystemDrive"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
D:\(...) - 0 error(s), 2 warning(s)
```
| priority | var not declared warning fix var not declared warning on the variables sdiskheader spartitionheader syntax checker copyright c tylo autoit team d getdiskinfofromwmi warning sdiskheader possibly not declared created yet sdiskheader disknum diskdeviceid diskmanufacturer diskmodel diskinterfacetype diskmediatype diskserialnumber diskstate disksize diskinittype diskpartitioncount windowsrunningondisk systemdrive d getdiskinfofromwmi warning spartitionheader possibly not declared created yet spartitionheader disknum partitionnum partitionid partitiontype partitionisprimary partitionisbootpartition partitionletter partitionlabel partitionfilesystem partitionsizetotal partitionsizeused partitionsizefree partitionissystemdrive d error s warning s | 1 |
131,730 | 5,164,978,588 | IssuesEvent | 2017-01-17 12:16:58 | snaiperskaya96/test-import-repo | https://api.github.com/repos/snaiperskaya96/test-import-repo | closed | Warehouse - Use a single template view | Accepted High Priority Refactor | https://trello.com/c/kGTwG0Pt/167-warehouse-use-a-single-template-view
Currently using three: `goodsin`, `goodsin2`, and `skeleton`. | 1.0 | Warehouse - Use a single template view - https://trello.com/c/kGTwG0Pt/167-warehouse-use-a-single-template-view
Currently using three: `goodsin`, `goodsin2`, and `skeleton`. | priority | warehouse use a single template view currently using three goodsin and skeleton | 1 |
266,797 | 8,375,284,780 | IssuesEvent | 2018-10-05 15:56:14 | hypothesis/lms | https://api.github.com/repos/hypothesis/lms | closed | Get Moodle, Blackboard and Sakai test sites | :bangbang: High Priority :bangbang: | Right now the only LMS that we have access to a test site for is Canvas. To make sure that new features work in other LMS's, and that existing features don't get broken in other LMS's, we need access to a test site for each LMS. The app is used in Moodle, Blackboard, Sakai and D2L.
See Slack thread: https://hypothes-is.slack.com/archives/CBN3DGW02/p1538745414000100 | 1.0 | Get Moodle, Blackboard and Sakai test sites - Right now the only LMS that we have access to a test site for is Canvas. To make sure that new features work in other LMS's, and that existing features don't get broken in other LMS's, we need access to a test site for each LMS. The app is used in Moodle, Blackboard, Sakai and D2L.
See Slack thread: https://hypothes-is.slack.com/archives/CBN3DGW02/p1538745414000100 | priority | get moodle blackboard and sakai test sites right now the only lms that we have access to a test site for is canvas to make sure that new features work in other lms s and that existing features don t get broken in other lms s we need access to a test site for each lms the app is used in moodle blackboard sakai and see slack thread | 1 |
731,317 | 25,209,855,379 | IssuesEvent | 2022-11-14 02:08:40 | Australian-Genomics/CTRL | https://api.github.com/repos/Australian-Genomics/CTRL | closed | Make CSV export from admin portal include question text instead of question ID | priority: high difficulty: medium | Exports should also include coded answers (e.g. `1`, `2`, instead of `yes`, `no`).
Something unexpected mentioned while @rosiejbrown described this issue is that, currently, question IDs seem to change depending on the user. This might indicate that CSV exports incorrectly contain answer IDs. This should be investigated as part of this issue and subsequent issues should be raised if needed. | 1.0 | Make CSV export from admin portal include question text instead of question ID - Exports should also include coded answers (e.g. `1`, `2`, instead of `yes`, `no`).
Something unexpected mentioned while @rosiejbrown described this issue is that, currently, question IDs seem to change depending on the user. This might indicate that CSV exports incorrectly contain answer IDs. This should be investigated as part of this issue and subsequent issues should be raised if needed. | priority | make csv export from admin portal include question text instead of question id exports should also include coded answers e g instead of yes no something unexpected mentioned while rosiejbrown described this issue is that currently question ids seem to change depending on the user this might indicate that csv exports incorrectly contain answer ids this should be investigated as part of this issue and subsequent issues should be raised if needed | 1 |
567,238 | 16,851,181,883 | IssuesEvent | 2021-06-20 14:43:46 | notawakestudio/NUSConnect | https://api.github.com/repos/notawakestudio/NUSConnect | opened | WK 7 SPRINT | priority.High | Docs:
- acceptance test: add a list of descriptions for features to be tested
- a paragraph to explain next month feature list
- user test feedback gathering: survey forms + record their feedback
- some edits to the readme
- poster
- add some more log activities
Tests:
- E2E testing (YL)
Gamification
- exp and badge ,Exp system (YL)
- notification about completion of the task
Module
- edit module schedule (JX)
Quiz
- New question type: Lab
- Get relevant post for each question
Forum
- Make question from post
- Make wiki from post/reply
| 1.0 | WK 7 SPRINT - Docs:
- acceptance test: add a list of descriptions for features to be tested
- a paragraph to explain next month feature list
- user test feedback gathering: survey forms + record their feedback
- some edits to the readme
- poster
- add some more log activities
Tests:
- E2E testing (YL)
Gamification
- exp and badge ,Exp system (YL)
- notification about completion of the task
Module
- edit module schedule (JX)
Quiz
- New question type: Lab
- Get relevant post for each question
Forum
- Make question from post
- Make wiki from post/reply
| priority | wk sprint docs acceptance test add a list of descriptions for features to be tested a paragraph to explain next month feature list user test feedback gathering survey forms record their feedback some edits to the readme poster add some more log activities tests testing yl gamification exp and badge exp system yl notification about completion of the task module edit module schedule jx quiz new question type lab get relevant post for each question forum make question from post make wiki from post reply | 1 |
752,763 | 26,324,233,777 | IssuesEvent | 2023-01-10 04:17:14 | super-cooper/memebot | https://api.github.com/repos/super-cooper/memebot | closed | Add a v2 Twitter API handle | feature high-priority | **Is your feature request related to a problem? Please describe.**
It was discovered during the investigation of #67 that Twitter API v1.1 does not support the lookup of child tweets in a thread. We are able to do this in API v2, so we must add a v2 handle to memebot.
**Describe the solution you'd like**
We should add a separate v2 handle, since v2 is a different API and is not (yet) a replacement for v1.1. We should all get new tokens that are usable with both APIs.
**Describe alternatives you've considered**
There were several other options on how to limit our scope to API v1.1 and still work around its issues to produce some resemblance of the previous feature. They all ended up sort of messy and spammy, and ultimately it is just easier and better to use the v2 API.
**Additional context**
Currently, #67 is the only issue (that we know of) which requires v2. As far as we know, video media links are unsupported by v2, which would make #97 not possible with v2. As far as we know, this is the only feature which is broken by v2.
| 1.0 | Add a v2 Twitter API handle - **Is your feature request related to a problem? Please describe.**
It was discovered during the investigation of #67 that Twitter API v1.1 does not support the lookup of child tweets in a thread. We are able to do this in API v2, so we must add a v2 handle to memebot.
**Describe the solution you'd like**
We should add a separate v2 handle, since v2 is a different API and is not (yet) a replacement for v1.1. We should all get new tokens that are usable with both APIs.
**Describe alternatives you've considered**
There were several other options on how to limit our scope to API v1.1 and still work around its issues to produce some resemblance of the previous feature. They all ended up sort of messy and spammy, and ultimately it is just easier and better to use the v2 API.
**Additional context**
Currently, #67 is the only issue (that we know of) which requires v2. As far as we know, video media links are unsupported by v2, which would make #97 impossible with v2; this is the only feature that would be broken by moving to v2.
| priority | add a twitter api handle is your feature request related to a problem please describe it was discovered during the investigation of that twitter api does not support the lookup of child tweets in a thread we are able to do this in api so we must add a handle to memebot describe the solution you d like we should add a separate handle since is a different api and is not yet a replacement for we should all get new tokens that are usable with both apis describe alternatives you ve considered there were several other options on how to limit our scope to api and still work around its issues to produce some resemblance of the previous feature they all ended up sort of messy and spammy and ultimately it is just easier and better to use the api additional context currently is the only issue that we know of which requires as far as we know video media links are unsupported by which would make not possible with as far as we know this is the only feature which is broken by | 1 |
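The separate-handle design the issue proposes can be sketched as follows; `TwitterHandles`, `lookup_children`, and `get_thread` are invented names for illustration, not memebot's actual API.

```python
from dataclasses import dataclass
from typing import Any

# Sketch of keeping distinct v1.1 and v2 handles side by side, as the issue
# proposes. The class and method names here are assumptions, not memebot code.
@dataclass
class TwitterHandles:
    v1: Any  # v1.1 client, still needed for features v2 lacks (e.g. video media links, #97)
    v2: Any  # v2 client, needed for child-tweet/thread lookup (#67)

    def lookup_children(self, tweet_id: str):
        # Thread/child-tweet lookup is only available through the v2 API.
        return self.v2.get_thread(tweet_id)
```

Both handles would be constructed from the same (new) tokens, since those tokens are usable with both APIs.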
645,058 | 20,993,522,407 | IssuesEvent | 2022-03-29 11:33:05 | tempus-finance/tempus-app | https://api.github.com/repos/tempus-finance/tempus-app | opened | The app breaks when typing in deposit value | bug high priority | **Description**
-
**To Reproduce**
1. Navigate to Staging environment.
2. Expand ETH pool.
3. Manage.
4. Deposit.
5. Pick ETH in "From" dropdown.
6. Enter the value 1111111.
**Expected behavior**
User can type in any value.
**Actual behavior**
The app breaks.
**Screenshots**

**Environment**
Operating System: Ubuntu
Browser: Chrome
Wallet: MetaMask
Network: Fantom
URL: Staging environment
**Additional context**
Sometimes the app breaks after typing just 2 digits. | 1.0 | The app breaks when typing in deposit value - **Description**
-
**To Reproduce**
1. Navigate to Staging environment.
2. Expand ETH pool.
3. Manage.
4. Deposit.
5. Pick ETH in "From" dropdown.
6. Enter the value 1111111.
**Expected behavior**
User can type in any value.
**Actual behavior**
The app breaks.
**Screenshots**

**Environment**
Operating System: Ubuntu
Browser: Chrome
Wallet: MetaMask
Network: Fantom
URL: Staging environment
**Additional context**
Sometimes the app breaks after typing just 2 digits. | priority | the app breaks when typing in deposit value description to reproduce navigate to staging environment expand eth pool manage deposit pick eth in from dropdown enter value expected behavior user can type in any value actual behavior the app breaks screenshots environment operating system ubuntu browser chrome wallet metamask network fantom url staging environment additional context sometimes the app breaks after typing just digits | 1 |
253,619 | 8,058,449,725 | IssuesEvent | 2018-08-02 18:30:35 | dojot/dojot | https://api.github.com/repos/dojot/dojot | opened | [GUI] Device Detail - boolean data "false" or "0" are not shown | Priority:High Team:Frontend Type:Bug | When the boolean attribute is selected, the values **false** or **0** are not shown.
Note: these data appear in the history

Baseline affected: **0.3.0-nightly20180712** | 1.0 | [GUI] Device Detail - boolean data "false" or "0" are not shown - When the boolean attribute is selected, the values **false** or **0** are not shown.
Note: these data appear in the history

Baseline affected: **0.3.0-nightly20180712** | priority | device detail boolean data false or are not shown when the boolean attribute is selected the values false or are not shown note these data appear in the history baseline affected | 1 |
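A plausible (unconfirmed) cause for this class of bug is a truthiness check in the rendering path, which treats `false` and `0` as "no value"; a minimal Python illustration of the pitfall:

```python
# Truthiness pitfall: `if value` silently drops False and 0, a common reason
# "false"/"0" attribute values are not displayed. This illustrates the likely
# bug class only; it is not dojot's actual rendering code.
def render_attr_wrong(value):
    return str(value) if value else ""              # hides False and 0 too

def render_attr_right(value):
    return str(value) if value is not None else ""  # only hides missing values
```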
739,540 | 25,601,451,052 | IssuesEvent | 2022-12-01 20:36:15 | wso2/api-manager | https://api.github.com/repos/wso2/api-manager | closed | Upgrade HTTP Core with the Fixes done | Type/Task Priority/Highest Component/APIM Component/MI 4.2.0-alpha | ### Description
$subject
We've forked HTTP Core and made several fixes, but these are not available in master. We need to upgrade to the latest Apache HTTP Core version that includes the fixes, or fork it and use that in master.
### Affected Component
APIM
### Version
4.2.0
### Related Issues
https://github.com/wso2-enterprise/wso2-apim-internal/issues/142
### Suggested Labels
_No response_ | 1.0 | Upgrade HTTP Core with the Fixes done - ### Description
$subject
We've forked HTTP Core and made several fixes, but these are not available in master. We need to upgrade to the latest Apache HTTP Core version that includes the fixes, or fork it and use that in master.
### Affected Component
APIM
### Version
4.2.0
### Related Issues
https://github.com/wso2-enterprise/wso2-apim-internal/issues/142
### Suggested Labels
_No response_ | priority | upgrade http core with the fixes done description subject we ve forked http core and done several fixes but these are not available in master need to upgrade to latest apache http core version with the fixes or fork and use this in master affected component apim version related issues suggested labels no response | 1 |
806,504 | 29,831,117,811 | IssuesEvent | 2023-06-18 09:34:43 | fedora-infra/bodhi | https://api.github.com/repos/fedora-infra/bodhi | closed | Update code to support sqlalchemy 2.0 | High priority Help needed high-trouble high-gain | We need to get rid of methods deprecated since SQLAlchemy 1.4 to support SQLAlchemy 2.0. | 1.0 | Update code to support sqlalchemy 2.0 - We need to get rid of methods deprecated since SQLAlchemy 1.4 to support SQLAlchemy 2.0. | priority | update code to support sqlalchemy we need to get rid of deprecated method since sqlalchemy to support sqlalchemy | 1
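The change this asks for is largely mechanical: legacy Query-style access is replaced by 2.0-style `select()`. A minimal sketch (the `Update` model and values are invented for illustration); this style already runs on SQLAlchemy 1.4, which is what makes an incremental migration possible:

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Update(Base):
    __tablename__ = "updates"
    id = Column(Integer, primary_key=True)
    title = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Update(id=1, title="example-update"))
    session.commit()
    # 2.0 style: select() + session.execute() replaces the legacy
    # session.query(...) forms deprecated since SQLAlchemy 1.4.
    row = session.execute(select(Update).where(Update.id == 1)).scalar_one()
    fetched_title = row.title
```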
554,617 | 16,434,822,184 | IssuesEvent | 2021-05-20 08:03:57 | sopra-fs21-group-09/sopra-fs21-group-09-client | https://api.github.com/repos/sopra-fs21-group-09/sopra-fs21-group-09-client | closed | Create MyModule Page | Frontend high priority task | This is part of User Story #9
Page should be empty at first
Add "Join a Module" button
Estimate: 2h
The ScrollBar needs fixing and the backend connection is missing
Page should be empty at first
Add "Join a Module" button
Estimate: 2h
The ScrollBar needs fixing and the backend connection is missing
225,902 | 7,496,100,087 | IssuesEvent | 2018-04-08 05:31:29 | CS2103JAN2018-W13-B4/main | https://api.github.com/repos/CS2103JAN2018-W13-B4/main | closed | Select command has no purpose | priority.high type.bug | From what I can see, all `select` does is highlight the entry.
<sub>[original: nus-cs2103-AY1718S2/pe-round1#1002]</sub>
Issue created by: @shanwpf | 1.0 | Select command has no purpose - From what I can see, all `select` does is highlight the entry.
<sub>[original: nus-cs2103-AY1718S2/pe-round1#1002]</sub>
Issue created by: @shanwpf | priority | select command has no purpose from what i can see all select does is highlight the entry issue created by shanwpf | 1 |
719,176 | 24,749,672,524 | IssuesEvent | 2022-10-21 12:46:02 | jphacks/E_2202 | https://api.github.com/repos/jphacks/E_2202 | closed | Also return the row/column numbers of extracted errors | ready for review priority: high Back End | Return the rows and columns of extracted errors
```
{
"result": [
{
"row_idx": int,
"col_idxes": {"start": int, "end": int}, // start <= range <= end
"text": str,
"type": ERROR_MESSAGE | LIBRARY_NAME
},
...
]
}
``` | 1.0 | Also return the row/column numbers of extracted errors - Return the rows and columns of extracted errors
```
{
"result": [
{
"row_idx": int,
"col_idxes": {"start": int, "end": int}, // start <= range <= end
"text": str,
"type": ERROR_MESSAGE | LIBRARY_NAME
},
...
]
}
``` | priority | also return the row column numbers of extracted errors return the rows and columns of extracted errors result row idx int col idxes start int end int start range end text str type error message library name | 1
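The response schema above can be produced by a small extractor; a sketch, with the regexes and function name assumed for illustration (the project's real matching logic may differ). Note the inclusive-end convention from the schema comment (`start <= range <= end`):

```python
import re

# Illustrative extractor producing the issue's response shape. The patterns
# below are assumptions, not the project's actual matching rules.
ERROR_RE = re.compile(r"\b\w*Error\b")      # e.g. "TypeError", "ValueError"
LIBRARY_RE = re.compile(r'File "([^"]+)"')  # file/library reference in a traceback

def extract(lines):
    result = []
    for row_idx, line in enumerate(lines):
        for pattern, kind in ((ERROR_RE, "ERROR_MESSAGE"), (LIBRARY_RE, "LIBRARY_NAME")):
            m = pattern.search(line)
            if m:
                result.append({
                    "row_idx": row_idx,
                    # start <= range <= end: the end index is inclusive
                    "col_idxes": {"start": m.start(), "end": m.end() - 1},
                    "text": m.group(0),
                    "type": kind,
                })
    return {"result": result}
```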
724,189 | 24,919,957,566 | IssuesEvent | 2022-10-30 20:54:11 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | [FSDP] Full State Dict unable to save models, assert failure fqn in state dict. 10/23+ nightlies | high priority triage review oncall: distributed module: fsdp | ### 🐛 Describe the bug
1 - Run FSDP using T5 (HF or modified)
2 - Attempt to save model checkpoint using Full State Dict.
3 - Receive assert :
AssertionError: FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict
but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param',
'_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=1.
This worked as of the 10/13 nightly, but now fails; the 10/24 and 10/26 nightlies were both tested to help pin down the cause.
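The assertion fires inside FSDP's `_full_post_state_dict_hook`, which expects per-parameter FQNs but finds only flattened `_flat_param` keys. For orientation, the FQN bookkeeping the hook relies on amounts to stripping the `_fsdp_wrapped_module.` wrapper prefix from state_dict keys; a minimal illustration of that idea (a sketch only, not FSDP's actual implementation):

```python
# Sketch of the FQN cleaning that FSDP's full-state-dict path performs:
# wrapper prefixes such as "_fsdp_wrapped_module." are stripped so keys match
# the original (unwrapped) module names. Not torch.distributed.fsdp's real code.
FSDP_PREFIX = "_fsdp_wrapped_module."

def clean_fqns(state_dict):
    cleaned = {}
    for key, value in state_dict.items():
        # Nested FSDP wrapping can introduce the prefix at several depths.
        cleaned[key.replace(FSDP_PREFIX, "")] = value
    return cleaned
```

In the failing runs the dict still holds only `_flat_param` entries at hook time, so the expected per-parameter FQN is absent and the assert trips.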
Full trace:
Traceback (most recent call last):
File "/home/ubuntu/transformer_framework/main_training.py", line 407, in <module>
fsdp_main()
File "/home/ubuntu/transformer_framework/main_training.py", line 344, in fsdp_main
model_checkpointing.save_model_checkpoint(
File "/home/ubuntu/transformer_framework/model_checkpointing/checkpoint_handler.py", line 135, in save_model_checkpoint
cpu_state = model.state_dict()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
Traceback (most recent call last):
File "/home/ubuntu/transformer_framework/main_training.py", line 407, in <module>
fsdp_main()
File "/home/ubuntu/transformer_framework/main_training.py", line 344, in fsdp_main
model_checkpointing.save_model_checkpoint(
File "/home/ubuntu/transformer_framework/model_checkpointing/checkpoint_handler.py", line 135, in save_model_checkpoint
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
cpu_state = model.state_dict()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
Traceback (most recent call last):
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
File "/home/ubuntu/transformer_framework/main_training.py", line 407, in <module>
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
fsdp_main()
File "/home/ubuntu/transformer_framework/main_training.py", line 344, in fsdp_main
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
model_checkpointing.save_model_checkpoint(
File "/home/ubuntu/transformer_framework/model_checkpointing/checkpoint_handler.py", line 135, in save_model_checkpoint
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
cpu_state = model.state_dict()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
[Previous line repeated 1 more time]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
[Previous line repeated 1 more time]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1697, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 365, in _post_state_dict_hook
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1697, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
processed_state_dict = _post_state_dict_hook_fn[fsdp_module._state_dict_type](
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 85, in _full_post_state_dict_hook
assert fqn in state_dict, (
AssertionError : hook_result = hook(self, destination, prefix, local_metadata)FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param', '_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=3.
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
[Previous line repeated 1 more time]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
return func(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 365, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_module._state_dict_type](
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 85, in _full_post_state_dict_hook
assert fqn in state_dict, (
AssertionError: FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param', '_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=2.
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1697, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 365, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_module._state_dict_type](
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 85, in _full_post_state_dict_hook
assert fqn in state_dict, (
AssertionError: FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param', '_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=1.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 28608 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 28610 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 28611 closing signal SIGTERM
### Versions
Collecting environment information...
PyTorch version: 1.14.0.dev20221026+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.14.0.dev20221026+cu116
[pip3] torch-model-archiver==0.5.3b20220226
[pip3] torch-workflow-archiver==0.2.4b20220513
[pip3] torchaudio==0.12.1
[pip3] torchserve==0.6.0b20220513
[pip3] torchvision==0.15.0.dev20221026+cu116
[pip3] vit-pytorch==0.37.1
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] captum 0.5.0 0 pytorch
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.4 py39h3d75532_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.14.0.dev20221026+cu116 pypi_0 pypi
[conda] torch-model-archiver 0.5.3 py39_0 pytorch
[conda] torch-workflow-archiver 0.2.4 py39_0 pytorch
[conda] torchaudio 0.12.1 py39_cu116 pytorch
[conda] torchserve 0.6.0 py39_0 pytorch
[conda] torchvision 0.15.0.dev20221026+cu116 pypi_0 pypi
[conda] vit-pytorch 0.37.1 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu | 1.0 | [FSDP] Full State Dict unable to save models, assert failure fqn in state dict. 10/23+ nightlies - ### 🐛 Describe the bug
1 - Run FSDP using T5 (HF or modified)
2 - Attempt to save model checkpoint using Full State Dict.
3 - Receive assert :
AssertionError: FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict
but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param',
'_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=1.
This worked as of the 10/13 nightly, but now fails; the 10/24 and 10/26 nightlies were both tested to help pin down the cause.
Full trace:
Traceback (most recent call last):
File "/home/ubuntu/transformer_framework/main_training.py", line 407, in <module>
fsdp_main()
File "/home/ubuntu/transformer_framework/main_training.py", line 344, in fsdp_main
model_checkpointing.save_model_checkpoint(
File "/home/ubuntu/transformer_framework/model_checkpointing/checkpoint_handler.py", line 135, in save_model_checkpoint
cpu_state = model.state_dict()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
Traceback (most recent call last):
File "/home/ubuntu/transformer_framework/main_training.py", line 407, in <module>
fsdp_main()
File "/home/ubuntu/transformer_framework/main_training.py", line 344, in fsdp_main
model_checkpointing.save_model_checkpoint(
File "/home/ubuntu/transformer_framework/model_checkpointing/checkpoint_handler.py", line 135, in save_model_checkpoint
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
cpu_state = model.state_dict()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
Traceback (most recent call last):
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
File "/home/ubuntu/transformer_framework/main_training.py", line 407, in <module>
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
fsdp_main()
File "/home/ubuntu/transformer_framework/main_training.py", line 344, in fsdp_main
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
model_checkpointing.save_model_checkpoint(
File "/home/ubuntu/transformer_framework/model_checkpointing/checkpoint_handler.py", line 135, in save_model_checkpoint
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
cpu_state = model.state_dict()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
[Previous line repeated 1 more time]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
[Previous line repeated 1 more time]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1697, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 365, in _post_state_dict_hook
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1697, in state_dict
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in state_dict
processed_state_dict = _post_state_dict_hook_fn[fsdp_module._state_dict_type](
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 85, in _full_post_state_dict_hook
assert fqn in state_dict, (
AssertionError : hook_result = hook(self, destination, prefix, local_metadata)FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param', '_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=3.
module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
[Previous line repeated 1 more time]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2326, in state_dict
return func(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 365, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_module._state_dict_type](
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 85, in _full_post_state_dict_hook
assert fqn in state_dict, (
AssertionError: FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param', '_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=2.
state_dict = super().state_dict(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1697, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 365, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_module._state_dict_type](
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 85, in _full_post_state_dict_hook
assert fqn in state_dict, (
AssertionError: FSDP assumes _fsdp_wrapped_module.encoder.block.0.layer.0.SelfAttention.q.weight is in the state_dict but the state_dict only has odict_keys(['_fsdp_wrapped_module._flat_param', '_fsdp_wrapped_module.encoder.block.0._flat_param']). prefix=_fsdp_wrapped_module.encoder.block.0., module_name=layer.0.SelfAttention.q. param_name=weight rank=1.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 28608 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 28610 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 28611 closing signal SIGTERM
### Versions
Collecting environment information...
PyTorch version: 1.14.0.dev20221026+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.14.0.dev20221026+cu116
[pip3] torch-model-archiver==0.5.3b20220226
[pip3] torch-workflow-archiver==0.2.4b20220513
[pip3] torchaudio==0.12.1
[pip3] torchserve==0.6.0b20220513
[pip3] torchvision==0.15.0.dev20221026+cu116
[pip3] vit-pytorch==0.37.1
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] captum 0.5.0 0 pytorch
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.4 py39h3d75532_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.14.0.dev20221026+cu116 pypi_0 pypi
[conda] torch-model-archiver 0.5.3 py39_0 pytorch
[conda] torch-workflow-archiver 0.2.4 py39_0 pytorch
[conda] torchaudio 0.12.1 py39_cu116 pytorch
[conda] torchserve 0.6.0 py39_0 pytorch
[conda] torchvision 0.15.0.dev20221026+cu116 pypi_0 pypi
[conda] vit-pytorch 0.37.1 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu | priority | full state dict unable to save models assert failure fqn in state dict nightlies 🐛 describe the bug run fsdp using hf or modified attempt to save model checkpoint using full state dict receive assert assertionerror fsdp assumes fsdp wrapped module encoder block layer selfattention q weight is in the state dict but the state dict only has odict keys fsdp wrapped module flat param fsdp wrapped module encoder block flat param prefix fsdp wrapped module encoder block module name layer selfattention q param name weight rank this worked as of nightly but now fails with and nightlies to help pin down the cause full trace traceback most recent call last file home ubuntu transformer framework main training py line in fsdp main file home ubuntu transformer framework main training py line in fsdp main model checkpointing save model checkpoint file home ubuntu transformer framework model checkpointing checkpoint handler py line in save model checkpoint cpu state model state dict file opt conda envs pytorch lib site packages torch distributed fsdp fully sharded data parallel py line in state dict traceback most recent call last file home ubuntu transformer framework main training py line in fsdp main file home ubuntu transformer framework main training py line in fsdp main model checkpointing save model checkpoint file home ubuntu transformer framework model checkpointing checkpoint handler py line in save model checkpoint state dict super state dict args kwargs file opt conda envs pytorch lib site packages torch nn modules module py line in state dict cpu state model state dict file opt conda envs pytorch lib site packages torch distributed fsdp fully sharded data parallel py line in state dict traceback most recent call last module state dict destination destination prefix prefix name keep vars keep vars file 
opt conda envs pytorch lib site packages torch nn modules module py line in state dict file home ubuntu transformer framework main training py line in state dict super state dict args kwargs file opt conda envs pytorch lib site packages torch nn modules module py line in state dict fsdp main file home ubuntu transformer framework main training py line in fsdp main module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch nn modules module py line in state dict model checkpointing save model checkpoint file home ubuntu transformer framework model checkpointing checkpoint handler py line in save model checkpoint module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch nn modules module py line in state dict cpu state model state dict file opt conda envs pytorch lib site packages torch distributed fsdp fully sharded data parallel py line in state dict module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch distributed fsdp fully sharded data parallel py line in state dict module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch nn modules module py line in state dict state dict super state dict args kwargs file opt conda envs pytorch lib site packages torch nn modules module py line in state dict module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch distributed fsdp fully sharded data parallel py line in state dict state dict super state dict args kwargs file opt conda envs pytorch lib site packages torch nn modules module py line in state dict module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch nn modules module 
py line in state dict hook result hook self destination prefix local metadata file opt conda envs pytorch lib site packages torch autograd grad mode py line in decorate context return func args kwargs state dict super state dict args kwargs file opt conda envs pytorch lib site packages torch distributed fsdp state dict utils py line in post state dict hook file opt conda envs pytorch lib site packages torch nn modules module py line in state dict module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch nn modules module py line in state dict processed state dict post state dict hook fn file opt conda envs pytorch lib site packages torch distributed fsdp state dict utils py line in full post state dict hook assert fqn in state dict assertionerror hook result hook self destination prefix local metadata fsdp assumes fsdp wrapped module encoder block layer selfattention q weight is in the state dict but the state dict only has odict keys prefix fsdp wrapped module encoder block module name layer selfattention q param name weight rank module state dict destination destination prefix prefix name keep vars keep vars file opt conda envs pytorch lib site packages torch autograd grad mode py line in decorate context file opt conda envs pytorch lib site packages torch distributed fsdp fully sharded data parallel py line in state dict return func args kwargs file opt conda envs pytorch lib site packages torch distributed fsdp state dict utils py line in post state dict hook processed state dict post state dict hook fn file opt conda envs pytorch lib site packages torch distributed fsdp state dict utils py line in full post state dict hook assert fqn in state dict assertionerror fsdp assumes fsdp wrapped module encoder block layer selfattention q weight is in the state dict but the state dict only has odict keys prefix fsdp wrapped module encoder block module name layer selfattention q param name weight 
rank state dict super state dict args kwargs file opt conda envs pytorch lib site packages torch nn modules module py line in state dict hook result hook self destination prefix local metadata file opt conda envs pytorch lib site packages torch autograd grad mode py line in decorate context return func args kwargs file opt conda envs pytorch lib site packages torch distributed fsdp state dict utils py line in post state dict hook processed state dict post state dict hook fn file opt conda envs pytorch lib site packages torch distributed fsdp state dict utils py line in full post state dict hook assert fqn in state dict assertionerror fsdp assumes fsdp wrapped module encoder block layer selfattention q weight is in the state dict but the state dict only has odict keys prefix fsdp wrapped module encoder block module name layer selfattention q param name weight rank warning torch distributed elastic multiprocessing api sending process closing signal sigterm warning torch distributed elastic multiprocessing api sending process closing signal sigterm warning torch distributed elastic multiprocessing api sending process closing signal sigterm versions collecting environment information pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version packaged by conda forge main may bit runtime python platform linux aws with is cuda available true cuda runtime version gpu models and configuration gpu nvidia gpu nvidia gpu nvidia gpu nvidia nvidia driver version cudnn version could not collect hip runtime version n a miopen runtime version n a is xnnpack available true versions of relevant libraries numpy torch torch model archiver torch workflow archiver torchaudio torchserve torchvision vit pytorch blas mkl conda forge blas devel mkl conda forge captum pytorch cudatoolkit conda forge libblas mkl conda forge libcblas mkl 
conda forge liblapack mkl conda forge liblapacke mkl conda forge magma pytorch mkl conda forge mkl devel conda forge mkl include conda forge numpy conda forge pytorch mutex cuda pytorch torch pypi pypi torch model archiver pytorch torch workflow archiver pytorch torchaudio pytorch torchserve pytorch torchvision pypi pypi vit pytorch pypi pypi cc ezyang gchanan mrshenli zhaojuanmao satgera rohan varma gqchen aazzolini osalpekar jiayisuse h huang awgu | 1 |
576,003 | 17,068,684,833 | IssuesEvent | 2021-07-07 10:31:49 | apache/airflow | https://api.github.com/repos/apache/airflow | closed | airflow is not accepting configuration from Admin UI | kind:bug priority:high | **Apache Airflow version**: 2.1.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NO
**Environment**: local (docker + ubuntu)
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): linux
**What happened**:
airflow is not accepting configuration from Admin UI
when I submit a DAG configuration from UI, it is giving me error `Invalid JSON configuration, must be a dict`

the same workflow is working in airflow 2.1.0 version
I think this issue occurred in this PR https://github.com/apache/airflow/pull/15057
**What you expected to happen**:
Airflow admin should accept the conf from UI and trigger the dag
**How to reproduce it**:
It can be reproduced for all cases, we have to put any valid JSON into the configuration window on a DAG
**Willing to submit PR**
YES
| 1.0 | airflow is not accepting configuration from Admin UI - **Apache Airflow version**: 2.1.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NO
**Environment**: local (docker + ubuntu)
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): linux
**What happened**:
airflow is not accepting configuration from Admin UI
when I submit a DAG configuration from UI, it is giving me error `Invalid JSON configuration, must be a dict`

the same workflow is working in airflow 2.1.0 version
I think this issue occurred in this PR https://github.com/apache/airflow/pull/15057
**What you expected to happen**:
Airflow admin should accept the conf from UI and trigger the dag
**How to reproduce it**:
It can be reproduced for all cases, we have to put any valid JSON into the configuration window on a DAG
**Willing to submit PR**
YES
| priority | airflow is not accepting configuration from admin ui apache airflow version kubernetes version if you are using kubernetes use kubectl version no environment local docker ubuntu cloud provider or hardware configuration os e g from etc os release debian gnu linux buster kernel e g uname a linux what happened airflow is not accepting configuration from admin ui when i submit a dag configuration from ui it is giving me error invalid json configuration must be a dict the same workflow is working in airflow version i think this issue occurred in this pr what you expected to happen airflow admin should accept the conf from ui and trigger the dag how to reproduce it it can be reproduced for all cases we have to put any valid json into the configuration window on a dag willing to submit pr yes | 1 |
720,678 | 24,801,234,739 | IssuesEvent | 2022-10-24 21:58:41 | umple/umple | https://api.github.com/repos/umple/umple | opened | The --path option does not work when generating Cpp files from umple | bug Usability Priority-High cpp | When I run umple:
`
java -jar umple.jar .\test.ump -g Cpp --path ./Cpp
`
It generates files for Cpp in the current path, instead of creating a new directory.
This happens when the language is Cpp. It is expected to generate files in the target directory.
Also the option string needs to be exactly "Cpp" for generating c++ files. It is better to make it case insensitive.
| 1.0 | The --path option does not work when generating Cpp files from umple - When I run umple:
`
java -jar umple.jar .\test.ump -g Cpp --path ./Cpp
`
It generates files for Cpp in the current path, instead of creating a new directory.
This happens when the language is Cpp. It is expected to generate files in the target directory.
Also the option string needs to be exactly "Cpp" for generating c++ files. It is better to make it case insensitive.
| priority | the path option does not work when generating cpp files from umple when i run umple java jar umple jar test ump g cpp path cpp it generates files for cpp in current path instead of creating a new directory this happens when the language is cpp is expected to geneated files in target directory also the option string needs to be exact cpp for generating c files it is better to make it case insensitive | 1 |
419,730 | 12,227,670,755 | IssuesEvent | 2020-05-03 16:11:53 | ppy/osu-framework | https://api.github.com/repos/ppy/osu-framework | closed | InvalidOperationException when pressing a key in a dropdown menu when settings SearchTextBox isn't empty | area:UI high priority | **Describe the bug:**
- Open the settings panel
- Search for an option with a dropdown using the `SearchTextBox`
- Open the dropdown
- Press any key, the game will throw an `InvalidOperationException`
**Screenshots or videos showing encountered issue:** [Video](https://drive.google.com/file/d/1cNvBwZNV-J8KYeGqjC8jmm2SpsHp7Gkq/view?usp=sharing)
**Logs:** [runtime.log](https://github.com/ppy/osu/files/4558015/runtime.log) | 1.0 | InvalidOperationException when pressing a key in a dropdown menu when settings SearchTextBox isn't empty - **Describe the bug:**
- Open the settings panel
- Search for an option with a dropdown using the `SearchTextBox`
- Open the dropdown
- Press any key, the game will throw an `InvalidOperationException`
**Screenshots or videos showing encountered issue:** [Video](https://drive.google.com/file/d/1cNvBwZNV-J8KYeGqjC8jmm2SpsHp7Gkq/view?usp=sharing)
**Logs:** [runtime.log](https://github.com/ppy/osu/files/4558015/runtime.log) | priority | invalidoperationexception when pressing a key in a dropdown menu when settings searchtextbox isn t empty describe the bug open the settings panel search for an option with a dropdown using the searchtextbox open the dropdown press any key the game will throw an invalidoperationexception screenshots or videos showing encountered issue logs | 1 |
365,103 | 10,775,635,689 | IssuesEvent | 2019-11-03 15:36:03 | AY1920S1-CS2103T-W12-2/main | https://api.github.com/repos/AY1920S1-CS2103T-W12-2/main | closed | Needs to update UG for command summary | priority.High | command summary is not correct. For example, the clone should have another optional field.


<hr><sub>[original: woon17/ped#8]<br/>
</sub> | 1.0 | Needs to update UG for command summary - command summary is not correct. For example, the clone should have another optional field.


<hr><sub>[original: woon17/ped#8]<br/>
</sub> | priority | needs to update ug for command summary command summary is not correct for example the clone should have another optional field | 1 |
649,077 | 21,217,372,746 | IssuesEvent | 2022-04-11 08:40:49 | microsoftgraph/microsoft-graph-explorer-v4 | https://api.github.com/repos/microsoftgraph/microsoft-graph-explorer-v4 | opened | [Header Area] Add sign in button | Priority: High Header area | As a user,
When I’m on the header area and I’m not signed in
I would like to see a sign in button
**Acceptance Criteria**
- Responsive design
- Accessibility
- The sign in functionality on the button is the same as the current sign in experience
**Design**
Please refer to figma file for design requirements
| 1.0 | [Header Area] Add sign in button - As a user,
When I’m on the header area and I’m not signed in
I would like to see a sign in button
**Acceptance Criteria**
- Responsive design
- Accessibility
- The sign in functionality on the button is the same as the current sign in experience
**Design**
Please refer to figma file for design requirements
| priority | add sign in button as a user when i’m on the header area and i’m not signed in i would like to see a sign in button acceptance criteria responsive design accessibility the sign in functionality on the button is the same as the current sign in experience design please refer to figma file for design requirements | 1 |
294,089 | 9,013,218,059 | IssuesEvent | 2019-02-05 18:54:51 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | ldap: Add support for syncing custom profile fields from LDAP | area: authentication area: settings (user) in progress priority: high | This shouldn't be too hard to develop and test; basically, we'd want to:
* First, review https://zulip.readthedocs.io/en/latest/subsystems/auth.html#testing-ldap-in-development for how to do LDAP testing in development.
* Update our `generate_dev_ldap_dir` function's mock LDAP directories to include values for some custom profile data fields (e.g. birthday would be a good choice).
* Add support for `AUTH_LDAP_USER_ATTR_MAP` containing CustomProfileFields, maybe spelled as `custom_profile_field__birthday: "birthday",`. To make this work, we'd need to add a `_populate_user` method in ZulipLDAPBackendBase in `zproject/backends.py` (overriding the default from `django-auth-ldap` (`/srv/zulip-py3-venv/lib/python3.5/site-packages/django_auth_ldap/backend.py`), though it should call `super()` to get the built-in code to run). That should ensure that both `manage.py sync_ldap_data` and user creation sync the fields over properly.
* Document this in `docs/production/authentication-methods.md`.
Once this is done, two follow-ups are relevant:
* [x] We should do https://github.com/zulip/zulip/issues/286 as well, since that'll involve the same technique (but with a bit more work since we need to deal with file uploads, so we should do this first).
* We should add options to control whether users can manually edit these custom profile fields (or perhaps just make them non-editable, since I think with this use case, that's almost certainly what you want) | 1.0 | ldap: Add support for syncing custom profile fields from LDAP - This shouldn't be too hard to develop and test; basically, we'd want to:
* First, review https://zulip.readthedocs.io/en/latest/subsystems/auth.html#testing-ldap-in-development for how to do LDAP testing in development.
* Update our `generate_dev_ldap_dir` function's mock LDAP directories to include values for some custom profile data fields (e.g. birthday would be a good choice).
* Add support for `AUTH_LDAP_USER_ATTR_MAP` containing CustomProfileFields, maybe spelled as `custom_profile_field__birthday: "birthday",`. To make this work, we'd need to add a `_populate_user` method in ZulipLDAPBackendBase in `zproject/backends.py` (overriding the default from `django-auth-ldap` (`/srv/zulip-py3-venv/lib/python3.5/site-packages/django_auth_ldap/backend.py`), though it should call `super()` to get the built-in code to run). That should ensure that both `manage.py sync_ldap_data` and user creation sync the fields over properly.
* Document this in `docs/production/authentication-methods.md`.
Once this is done, two follow-ups are relevant:
* [x] We should do https://github.com/zulip/zulip/issues/286 as well, since that'll involve the same technique (but with a bit more work since we need to deal with file uploads, so we should do this first).
* We should add options to control whether users can manually edit these custom profile fields (or perhaps just make them non-editable, since I think with this use case, that's almost certainly what you want) | priority | ldap add support for syncing custom profile fields from ldap this shouldn t be too hard to develop and test basically we d want to for review for how to do ldap testing in development update our generate dev ldap dir function s mock ldap directories to include values for some custom profile data fields e g birthday would be a good choice add support for auth ldap user attr map containing customprofilefields maybe spelled as custom profile field birthday birthday to make this work we d need to add a populate user method in zulipldapbackendbase in zproject backends py overriding the default from django auth ldap srv zulip venv lib site packages django auth ldap backend py though the it should call super to get the built in code to run that should ensure that both manage py sync ldap data and user creation sync the fields over properly document this is docs production authentication methods md once this is done two follow ups are relevant we should do as well since that ll involve the same technique but with a bit more work since we need to deal with file uploads so we should do this first we should add options to control whether users can manually edit these custom profile fields or perhaps just make them non editable since i think with this use case that s almost certainly what you want | 1 |
366,538 | 10,822,076,355 | IssuesEvent | 2019-11-08 20:16:59 | rstudio/gt | https://api.github.com/repos/rstudio/gt | closed | Using stub data for conditional styling | Difficulty: ③ Advanced Effort: ② Medium Priority: ③ High Type: ★ Enhancement | Is it possible to use the stub data in tab_style? For example, I'm using a "year" column (renamed to rowname for purposes of gt) and want to bold all rows where the year is 201x, but it looks like that column gets thrown out of the available data in the course of building the pretty table.
```
tab %>%
rename(rowname = year) %>%
gt() %>%
tab_style(
style = cells_styles(
text_weight = "bold"),
locations = cells_data(
rows = ???? == 2019)
)
``` | 1.0 | Using stub data for conditional styling - Is it possible to use the stub data in tab_style? For example, I'm using a "year" column (renamed to rowname for purposes of gt) and want to bold all rows where the year is 201x, but it looks like that column gets thrown out of the available data in the course of building the pretty table.
```
tab %>%
rename(rowname = year) %>%
gt() %>%
tab_style(
style = cells_styles(
text_weight = "bold"),
locations = cells_data(
rows = ???? == 2019)
)
``` | priority | using stub data for conditional styling is it possible to use the stub data in tab style for example i m using a year column renamed to rowname for purposes of gt and want to bold all rows where the year is but it looks like that column gets thrown out of the available data in the course of building the pretty table tab rename rowname year gt tab style style cells styles text weight bold locations cells data rows | 1 |
373,047 | 11,032,382,009 | IssuesEvent | 2019-12-06 20:02:27 | alect47/Playlist | https://api.github.com/repos/alect47/Playlist | closed | DELETE /api/v1/playlist/:id | High Priority | As a user, I can delete a playlist from my playlists
```
DELETE /api/v1/playlists/:id
```
- [x] Delete the playlist with the id passed in
- [x] return 204 status code
- [x] If favorite not found return 404 | 1.0 | DELETE /api/v1/playlist/:id - As a user, I can delete a playlist from my playlists
```
DELETE /api/v1/playlists/:id
```
- [x] Delete the playlist with the id passed in
- [x] return 204 status code
- [x] If favorite not found return 404 | priority | delete api playlist id as a user i can delete a playlist from my playlists delete api playlists id delete the playlist with the id passed in return status code if favorite not found return | 1 |
752,094 | 26,273,046,482 | IssuesEvent | 2023-01-06 18:57:32 | woocommerce/woocommerce | https://api.github.com/repos/woocommerce/woocommerce | opened | [Core] Look for and resolve unhandled return types | priority: high type: task plugin: woocommerce | Ongoing task to find and introduce handling for cases where we do not handle all possible return values. Example:
```php
# wc_get_product() may return a WC_Product, null or bool:
$product = wc_get_product( $some_id );
# Assuming it will be a WC_Product without checking is dangerous:
print $product->get_id();
```
- Internal conversation: p1673025010514109-slack-C0E1AV8T0
- Recent examples: [[1]](https://github.com/woocommerce/woocommerce/issues/35903) [[2]](https://github.com/woocommerce/woocommerce/issues/35916)
Outcome of this PR should be multiple PRs addressing the issues we discover. | 1.0 | [Core] Look for and resolve unhandled return types - Ongoing task to find and introduce handling for cases where we do not handle all possible return values. Example:
```php
# wc_get_product() may return a WC_Product, null or bool:
$product = wc_get_product( $some_id );
# Assuming it will be a WC_Product without checking is dangerous:
print $product->get_id();
```
- Internal conversation: p1673025010514109-slack-C0E1AV8T0
- Recent examples: [[1]](https://github.com/woocommerce/woocommerce/issues/35903) [[2]](https://github.com/woocommerce/woocommerce/issues/35916)
Outcome of this PR should be multiple PRs addressing the issues we discover. | priority | look for and resolve unhandled return types ongoing task to find and introduce handling for cases where we do not handle all possible return values example php wc get product may return a wc product null or bool product wc get product some id assuming it will be a wc product without checking is dangerous print product get id internal conversation slack recent examples outcome of this pr should be multiple prs addressing the issues we discover | 1 |
268,080 | 8,403,109,377 | IssuesEvent | 2018-10-11 08:52:25 | AGROFIMS/hagrofims | https://api.github.com/repos/AGROFIMS/hagrofims | opened | Change how area harvested is measured | enhancement experimental conditions high priority measurement | Area harvested to be depicted as: Number of rows x length harvest; length x width harvested | 1.0 | Change how area harvested is measured - Area harvested to be depicted as: Number of rows x length harvest; length x width harvested | priority | change how area harvested is measured area harvested to be depicted as number of rows x length harvest length x width harvested | 1 |
443,760 | 12,799,196,446 | IssuesEvent | 2020-07-02 15:02:04 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | web admin: using Switch Groups in Nodes menu take a while | Priority: High Type: Bug | **Describe the bug**
Since 54c8ca3ba7c47b949e7e49e75c49db8b869bd959, web admin displays all switch groups in Nodes menu (in place of 25 in the past).
I noticed that API call (`/api/v1/config/switch_groups?limit=1000`) can take a while because we request all the configuration elements for switch groups but we don't really use it at this moment.
On top of that, once we got all switch groups, we do a new query to get members of all switch groups (`/api/v1/config/switch_group/default/members`). Again, we got all configuration information for each switch which is not necessary at this point.
I could be wrong but from my point of view, we only need to get:
- switch groups names
- switch groups members with their description and IP address
**Expected behavior**
Only request needed informations on switch and switch groups to speed up loading of Nodes menu.
| 1.0 | web admin: using Switch Groups in Nodes menu take a while - **Describe the bug**
Since 54c8ca3ba7c47b949e7e49e75c49db8b869bd959, web admin displays all switch groups in Nodes menu (in place of 25 in the past).
I noticed that API call (`/api/v1/config/switch_groups?limit=1000`) can take a while because we request all the configuration elements for switch groups but we don't really use it at this moment.
On top of that, once we got all switch groups, we do a new query to get members of all switch groups (`/api/v1/config/switch_group/default/members`). Again, we got all configuration information for each switch which is not necessary at this point.
I could be wrong but from my point of view, we only need to get:
- switch groups names
- switch groups members with their description and IP address
**Expected behavior**
Only request needed informations on switch and switch groups to speed up loading of Nodes menu.
| priority | web admin using switch groups in nodes menu take a while describe the bug since web admin displays all switch groups in nodes menu in place of in the past i noticed that api call api config switch groups limit can take a while because we request all the configuration elements for switch groups but we don t really use it at this moment on top of that once we got all switch groups we do a new query to get members of all switch groups api config switch group default members again we got all configuration information for each switch which is not necessary at this point i could be wrong but from my point of view we only need to get switch groups names switch groups members with their description and ip address expected behavior only request needed informations on switch and switch groups to speed up loading of nodes menu | 1 |
98,842 | 4,031,948,669 | IssuesEvent | 2016-05-18 18:54:11 | raml-org/raml-js-parser-2 | https://api.github.com/repos/raml-org/raml-js-parser-2 | closed | previous parser did not have problems with empty values | bug priority:high raml-0.8 | The previous parser seems to have no problem when you did not define a value with a key. For example:
```yaml
description: Collection of available <<resourcePathName>> in Hybrid.
get:
description: Get a list of <<resourcePathName>>.
responses:
200:
body:
application/json:
example: |
{ "data": [
<<exampleItem>>
]
}
post:
body:
application/json:
schema: # here is the problem
responses:
201:
body:
application/json:
example: |
{ "data": <<exampleItem>>
}
```
That's a resource type definition and the parser complains that the schema is empty. That was fully valid with the previous. | 1.0 | previous parser did not have problems with empty values - The previous parser seems to have no problem when you did not define a value with a key. For example:
```yaml
description: Collection of available <<resourcePathName>> in Hybrid.
get:
description: Get a list of <<resourcePathName>>.
responses:
200:
body:
application/json:
example: |
{ "data": [
<<exampleItem>>
]
}
post:
body:
application/json:
schema: # here is the problem
responses:
201:
body:
application/json:
example: |
{ "data": <<exampleItem>>
}
```
That's a resource type definition and the parser complains that the schema is empty. That was fully valid with the previous. | priority | previous parser did not have problems with empty values the previous parser seem to have no problem when you did not define a value with a key for example yaml description collection of available in hybrid get description get a list of responses body application json example data post body application json schema here is the problem responses body application json example data thats a resource type definition and the parser complains that the schema is empty that was fully valid with the previous | 1 |
179,622 | 6,627,400,096 | IssuesEvent | 2017-09-23 02:07:31 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | AWX doesn't show jobs output in ad-hoc commands | component:api priority:high state:needs_info type:bug | ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- UI
##### SUMMARY
When you launch a job it doesn't show the output.
##### ENVIRONMENT
* AWX version: 1.0.0.532
* Ansible version: 2.4.0.0
* Operating System: RHEL7
* Web Browser: chrome / firefox/ IE
##### STEPS TO REPRODUCE
Simply run a Job:
Add a host, run ping command on host created.
##### EXPECTED RESULTS
See output from the job.
##### ACTUAL RESULTS
Nothing happens on output.
It's stuck on "Waiting for results..." and the status is "Waiting"
If I browse to JOBS, I can see the outputs as well
##### ADDITIONAL INFORMATION
I'm running AWX on docker locally, see docker output below:
```
[root@docker01 ansible-awx]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ad77e547f1a awx_task:1.0.0.532 "/tini -- /bin/sh ..." 20 minutes ago Up 20 minutes 8052/tcp awx_task
cc8a0efe9a92 awx_web:1.0.0.532 "/tini -- /bin/sh ..." 20 minutes ago Up 20 minutes 0.0.0.0:80->8052/tcp awx_web
413ebefcbb5e memcached:alpine "docker-entrypoint..." 20 minutes ago Up 20 minutes 11211/tcp memcached
878022ae8e55 rabbitmq:3 "docker-entrypoint..." 20 minutes ago Up 20 minutes 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
3170d5795036 postgres:9.6 "docker-entrypoint..." 20 minutes ago Up 20 minutes 5432/tcp postgres
```
| 1.0 | AWX doesn't show jobs output in ad-hoc commands - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- UI
##### SUMMARY
When you launch a job it doesn't show the output.
##### ENVIRONMENT
* AWX version: 1.0.0.532
* Ansible version: 2.4.0.0
* Operating System: RHEL7
* Web Browser: chrome / firefox/ IE
##### STEPS TO REPRODUCE
Simply run a Job:
Add a host, run ping command on host created.
##### EXPECTED RESULTS
See output from the job.
##### ACTUAL RESULTS
Nothing happens on output.
It's stuck on "Waiting for results..." and the status is "Waiting"
If I browse to JOBS, I can see the outputs as well
##### ADDITIONAL INFORMATION
I'm running AWX on docker locally, see docker output below:
```
[root@docker01 ansible-awx]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ad77e547f1a awx_task:1.0.0.532 "/tini -- /bin/sh ..." 20 minutes ago Up 20 minutes 8052/tcp awx_task
cc8a0efe9a92 awx_web:1.0.0.532 "/tini -- /bin/sh ..." 20 minutes ago Up 20 minutes 0.0.0.0:80->8052/tcp awx_web
413ebefcbb5e memcached:alpine "docker-entrypoint..." 20 minutes ago Up 20 minutes 11211/tcp memcached
878022ae8e55 rabbitmq:3 "docker-entrypoint..." 20 minutes ago Up 20 minutes 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
3170d5795036 postgres:9.6 "docker-entrypoint..." 20 minutes ago Up 20 minutes 5432/tcp postgres
```
| priority | awx doesn t show jobs output in ad hoc commands issue type bug report component name ui summary when you launch a job it doesn t show the output environment awx version ansible version operating system web browser chrome firefox ie steps to reproduce simple run a job add a host run ping command on host created expected results see output from the job actual results nothing happens on output it stuck on waiting for results and the status is waiting if i browse to jobs i can see the outputs as well additional information i m running awx on docker locally see docker output bellow docker ps container id image command created status ports names awx task tini bin sh minutes ago up minutes tcp awx task awx web tini bin sh minutes ago up minutes tcp awx web memcached alpine docker entrypoint minutes ago up minutes tcp memcached rabbitmq docker entrypoint minutes ago up minutes tcp tcp tcp rabbitmq postgres docker entrypoint minutes ago up minutes tcp postgres | 1 |
234,164 | 7,717,520,929 | IssuesEvent | 2018-05-23 13:57:29 | ODIQueensland/data-curator | https://api.github.com/repos/ODIQueensland/data-curator | closed | Find menu item order and shortcuts | est:Moderate fn:Find-Replace fn:Interface priority:High problem:Bug | ### Current Behaviour (for problems)
<img width="649" alt="screenshot 2018-05-10 21 34 53" src="https://user-images.githubusercontent.com/9379524/39867871-59539e54-549a-11e8-8971-0011adac5f3c.png">
### Expected Behaviour
> When reporting a problem, describe what should happen
- correct find previous shortcut
- fix replace next / previous order and shortcuts
<img width="207" alt="screenshot 2018-05-10 21 37 47" src="https://user-images.githubusercontent.com/9379524/39867895-7991b49e-549a-11e8-94ec-0e2f172bbdd5.png">
@mattRedBox I implemented this in https://github.com/ODIQueensland/data-curator/commit/0a1e908768af06abbabd063c5d810d47ce006a1d and fixed an error in https://github.com/ODIQueensland/data-curator/commit/f32b48f8bf3ab75ad36567d38e1304068e0646de but seem to have introduced a problem
Now find previous seems to invoke replace previous
Feel free to roll back and ignore my attempt at what I thought was a simple change - sorry
### Your Environment
> Include details about the environment you experienced the problem - this will help us fix the bug quicker.
* Data Curator version: 0.16.0 Develop branch
* Operating System and version: macOS High Sierra 10.13.4 | 1.0 | Find menu item order and shortcuts - ### Current Behaviour (for problems)
<img width="649" alt="screenshot 2018-05-10 21 34 53" src="https://user-images.githubusercontent.com/9379524/39867871-59539e54-549a-11e8-8971-0011adac5f3c.png">
### Expected Behaviour
> When reporting a problem, describe what should happen
- correct find previous shortcut
- fix replace next / previous order and shortcuts
<img width="207" alt="screenshot 2018-05-10 21 37 47" src="https://user-images.githubusercontent.com/9379524/39867895-7991b49e-549a-11e8-94ec-0e2f172bbdd5.png">
@mattRedBox I implemented this in https://github.com/ODIQueensland/data-curator/commit/0a1e908768af06abbabd063c5d810d47ce006a1d and fixed an error in https://github.com/ODIQueensland/data-curator/commit/f32b48f8bf3ab75ad36567d38e1304068e0646de but I seem to have introduced a problem
Now find previous seems to invoke replace previous
Feel free to roll back and ignore my attempt at what I thought was a simple change - sorry
### Your Environment
> Include details about the environment you experienced the problem - this will help us fix the bug quicker.
* Data Curator version: 0.16.0 Develop branch
* Operating System and version: macOS High Sierra 10.13.4 | priority | find menu item order and shortcuts current behaviour for problems img width alt screenshot src expected behaviour when reporting a problem describe what should happen correct find previous shortcut fix replace next previous order and shortcuts img width alt screenshot src mattredbox i implemented this in and fixed an error in but seem to have introduced a problem now find previous seems to invoke replace previous feel free to rollback and ignore my attempt at what i thought was a simple change sorry your environment include details about the environment you experienced the problem this will help us fix the bug quicker data curator version develop branch operating system and version macos high sierra | 1 |
51,829 | 3,014,484,340 | IssuesEvent | 2015-07-29 15:00:45 | patrickomni/omnimobileserver | https://api.github.com/repos/patrickomni/omnimobileserver | closed | Alert - row entered in table each time a fuel device reading is below threshold | bug Priority HIGH | Frank configured a fuel device sensor registered in the Test Tanks account to report every 10 minutes and set a device level rule to create an alert if the level is below 50%.
Expected: a single alert in Active Alerts table along w/ a single SMS msg sent to all configured users (one of whom is Frank)
Actual: an entry in Active Alert table for the device every 10 minutes, many text messages sent to Frank
 | 1.0 | Alert - row entered in table each time a fuel device reading is below threshold - Frank configured a fuel device sensor registered in the Test Tanks account to report every 10 minutes and set a device level rule to create an alert if the level is below 50%.
Expected: a single alert in Active Alerts table along w/ a single SMS msg sent to all configured users (one of whom is Frank)
Actual: an entry in Active Alert table for the device every 10 minutes, many text messages sent to Frank
 | priority | alert row entered in table each time a fuel device reading is below threshold frank configured a fuel device sensor registered in the test tanks account to report every minutes and set a device level rule to create an alert if the level is below expected a single alert in active alerts table along w a single sms msg sent to all configured users one of whom is frank actual an entry in active alert table for the device every minutes many text messages sent to frank | 1 |
268,216 | 8,404,983,714 | IssuesEvent | 2018-10-11 14:13:08 | highcharts/highcharts | https://api.github.com/repos/highcharts/highcharts | closed | IE8 / jQuery UI 1.8 - Mouse moves not working | Highcharts Priority:Low | As noted in this thread: http://highslide.com/forum/viewtopic.php?f=9&t=10912&p=51929#p51929
In IE8, charts inside jQuery dialogs with jQuery UI CSS 1.8+ won't allow zooming. If the CSS is 1.7, the zooming works. If IE8 is in compatibility mode, it works. I believe IE9 has the same issue. jQuery UI CSS 1.8 and 1.7 work in both Chrome and Firefox.
Here is an example of jQuery UI CSS 1.7. All browsers I've tested can zoom. http://jsfiddle.net/UkpZ8/17/
Here is an example of jquery ui css 1.8. Won't zoom in ie8. http://jsfiddle.net/UkpZ8/26
| 1.0 | IE8 / jQuery UI 1.8 - Mouse moves not working - As noted in this thread: http://highslide.com/forum/viewtopic.php?f=9&t=10912&p=51929#p51929
In IE8, charts inside jQuery dialogs with jQuery UI CSS 1.8+ won't allow zooming. If the CSS is 1.7, the zooming works. If IE8 is in compatibility mode, it works. I believe IE9 has the same issue. jQuery UI CSS 1.8 and 1.7 work in both Chrome and Firefox.
Here is an example of jQuery UI CSS 1.7. All browsers I've tested can zoom. http://jsfiddle.net/UkpZ8/17/
Here is an example of jquery ui css 1.8. Won't zoom in ie8. http://jsfiddle.net/UkpZ8/26
| priority | jquery ui mouse moves not working as noted in this thread in charts inside of jquery dialogs with jqueryui css won t allow zooming if the css is the zooming works if is in compatibility mode it works i believe has the same issue jquery ui css and work in both chrome and firefox here is an example of jquery ui css all browser i ve tested can zoom here is an example of jquery ui css won t zoom in | 1 |
247,151 | 7,904,063,767 | IssuesEvent | 2018-07-02 01:44:17 | xcat2/xcat2-task-management | https://api.github.com/repos/xcat2/xcat2-task-management | closed | [HA]add --dryrun for xcatha.py | priority:high sprint1 | * [x] mini-design of option `--dryrun`
* [x] implement code
This request is from Victor, and we think it is valuable. Before xcatha.py does the real work, it can do a dry run, so the user can know whether the process and configuration are what they want.
The details:
Have a --dryrun option, and also some tags so that we can get a single-line output of what is being done...
``xcatha.py --dryrun`` .... | grep INFO but seems like this generates too many messages
For example:
```
2018-05-30 10:55:25,367 - INFO - Running cp -f /tmp/ha_mn /etc/xcat/ha_mn
2018-05-30 10:55:25,404 - INFO - cp -f /tmp/ha_mn /etc/xcat/ha_mn
```
Seems like one is INFO, but another should be DEBUG ...
If we can reduce the output so that grep INFO will essentially give us a high level "steps", it would be beneficial to be more "self documenting" | 1.0 | [HA]add --dryrun for xcatha.py - * [x] mini-design of option `--dryrun`
* [x] implement code
This request is from Victor, and we think it is valuable. Before xcatha.py does the real work, it can do a dry run, so the user can know whether the process and configuration are what they want.
The details:
Have a --dryrun option, and also some tags so that we can get a single-line output of what is being done...
``xcatha.py --dryrun`` .... | grep INFO but seems like this generates too many messages
For example:
```
2018-05-30 10:55:25,367 - INFO - Running cp -f /tmp/ha_mn /etc/xcat/ha_mn
2018-05-30 10:55:25,404 - INFO - cp -f /tmp/ha_mn /etc/xcat/ha_mn
```
Seems like one is INFO, but another should be DEBUG ...
If we can reduce the output so that grep INFO will essentially give us a high level "steps", it would be beneficial to be more "self documenting" | priority | add dryrun for xcatha py mini design of option dryrun implement code this request is from victor we think it is valuable before xcatha py do the real work it can dryrun then user can know if the process and configuration are what he want the details have a dryrun option and then also have some tags that we can get a single line output of what is being done xcatha py dryrun grep info but seems like this generates too many messages for example info running cp f tmp ha mn etc xcat ha mn info cp f tmp ha mn etc xcat ha mn seems like one is info but another should be debug if we can reduce the output so that grep info will essentially give us a high level steps it would be beneficial to be more self documenting | 1 |
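The logging split requested in the issue above (INFO for the high-level step, DEBUG for the execution detail, plus a --dryrun mode) can be sketched in Python. This is a hypothetical illustration, not the actual xcatha.py code; the function name and the commands shown are assumptions:

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s - %(levelname)s - %(message)s")
logger = logging.getLogger("xcatha")

def run_command(cmd, dry_run=False):
    # High-level step, always logged at INFO, so that
    # `xcatha.py --dryrun ... | grep INFO` prints one line per action.
    logger.info("Running %s", cmd)
    if dry_run:
        # Nothing is executed; the user only sees what *would* be done.
        return 0
    # Execution detail belongs at DEBUG, keeping the INFO output short.
    logger.debug("Executing: %s", cmd)
    return subprocess.call(cmd, shell=True)

run_command("cp -f /tmp/ha_mn /etc/xcat/ha_mn", dry_run=True)
```

With `dry_run=True` the user gets the high-level "steps" without any side effects, which is what makes the tool more self-documenting.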
390,396 | 11,543,166,484 | IssuesEvent | 2020-02-18 09:05:55 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Creates ledger issues if quantities on stock take lines get set to go infinite and finalise it | Docs: not needed Effort: small Module: dispensary Priority: high | ## Describe the bug
Mobile allows stock taking.
When doing the stocktake against the associated lines and setting the quantity on a line that has stock until it goes infinite, it creates a ledger problem on desktop against that item in that store.
### To reproduce
Steps to reproduce the behavior:
1. Create a stock take line against the current stock for one item
2. View that line and, in the quantity field, set the quantity until you see `infinite`
3. Finalize that stock take
4. See error
### Expected behaviour
No ledger issues should be created
### Proposed Solution
Am not sure but maybe don't let it go to `infinite`
### Version and device info
- App version: v4.0.0 RC10
- Tablet model: Samsung
- OS version: 4.4.4
### Additional context
Shared this to @joshxg via Telegram
Leaving this to @joshxg to put his thought.
| 1.0 | Creates ledger issues if quantities on stock take lines get set to go infinite and finalise it - ## Describe the bug
Mobile allows stock taking.
When doing the stocktake against the associated lines and setting the quantity on a line that has stock until it goes infinite, it creates a ledger problem on desktop against that item in that store.
### To reproduce
Steps to reproduce the behavior:
1. Create a stock take line against the current stock for one item
2. View that line and, in the quantity field, set the quantity until you see `infinite`
3. Finalize that stock take
4. See error
### Expected behaviour
No ledger issues should be created
### Proposed Solution
Am not sure but maybe don't let it go to `infinite`
### Version and device info
- App version: v4.0.0 RC10
- Tablet model: Samsung
- OS version: 4.4.4
### Additional context
Shared this to @joshxg via Telegram
Leaving this to @joshxg to put his thought.
| priority | creates ledger issues if quantities on stock take lines get set to go infinite and finalise it describe the bug mobile allows stock taking doing the stocktake against the associated lines when setting the quantity to go infinite against that line having stock it creates the ledger problem on desktop against that item on that store to reproduce steps to reproduce the behavior create a stock take line against the current stock for one item view that lines and on quantity field set the quantity until you get to see infinite finalize that stock take see error expected behaviour no ledger issues should be created proposed solution am not sure but maybe don t let it go to infinite version and device info app version tablet model samsung os version additional context shared this to joshxg via telegram leaving this to joshxg to put his thought | 1 |
591,186 | 17,796,698,301 | IssuesEvent | 2021-08-31 23:37:17 | StatisticsNZ/simplevis | https://api.github.com/repos/StatisticsNZ/simplevis | opened | bar: x_var date labels are not working correctly | high priority | This probably affects line as well
```
setup_datalake_access()
no2_nzta <- er.helpers::read_from_datalake( "air/2021/tidy/no2_nzta.RDS")
sitecheck_data <- no2_nzta %>%
select(site, "value" = concentration, month, year) %>%
mutate(len = str_length(site)) %>%
mutate(temp_id = as.character(substring(site, 1,6))) %>%
group_by(temp_id) %>%
filter(any(str_length(site) > 6)) %>%
mutate(measurement_date = lubridate::my(paste0(month, year)) %>% lubridate::as_date()) %>%
mutate(site = as.character(site))
sitecheck_data %>%
filter(temp_id == "AUC004") %>%
simplevis::gg_bar_col(., x_var = measurement_date, y_var = value, col_var = site)
sitecheck_data %>%
filter(temp_id == "AUC004") %>%
ggplot(aes(x = measurement_date, y = value, fill = site)) +
geom_col()
```
| 1.0 | bar: x_var date labels are not working correctly - This probably affects line as well
```
setup_datalake_access()
no2_nzta <- er.helpers::read_from_datalake( "air/2021/tidy/no2_nzta.RDS")
sitecheck_data <- no2_nzta %>%
select(site, "value" = concentration, month, year) %>%
mutate(len = str_length(site)) %>%
mutate(temp_id = as.character(substring(site, 1,6))) %>%
group_by(temp_id) %>%
filter(any(str_length(site) > 6)) %>%
mutate(measurement_date = lubridate::my(paste0(month, year)) %>% lubridate::as_date()) %>%
mutate(site = as.character(site))
sitecheck_data %>%
filter(temp_id == "AUC004") %>%
simplevis::gg_bar_col(., x_var = measurement_date, y_var = value, col_var = site)
sitecheck_data %>%
filter(temp_id == "AUC004") %>%
ggplot(aes(x = measurement_date, y = value, fill = site)) +
geom_col()
```
| priority | bar x var date labels are not working correctly this probably affects line as well setup datalake access nzta er helpers read from datalake air tidy nzta rds sitecheck data select site value concentration month year mutate len str length site mutate temp id as character substring site group by temp id filter any str length site mutate measurement date lubridate my month year lubridate as date mutate site as character site sitecheck data filter temp id simplevis gg bar col x var measurement date y var value col var site sitecheck data filter temp id ggplot aes x measurement date y value fill site geom col | 1 |
440,935 | 12,706,430,033 | IssuesEvent | 2020-06-23 07:10:24 | wso2/devstudio-tooling-ei | https://api.github.com/repos/wso2/devstudio-tooling-ei | closed | Add support for new attributes for sequence template parameters | Priority/Highest Severity/Critical | **Description:**
With the improvement [Add mandatory parameter support for Sequence Templates](https://github.com/wso2/micro-integrator/issues/1673), the WSO2 EI runtime introduces two new attributes for template parameters.
WSO2 Integration Studio needs to support these two new attributes. Currently, they are not persisted or added to the CAR file.
**Related Issues:**
https://github.com/wso2/micro-integrator/issues/1673
Can fix together: https://github.com/wso2/devstudio-tooling-ei/issues/1091 | 1.0 | Add support for new attributes for sequence template parameters - **Description:**
With the improvement [Add mandatory parameter support for Sequence Templates](https://github.com/wso2/micro-integrator/issues/1673), the WSO2 EI runtime introduces two new attributes for template parameters.
WSO2 Integration Studio needs to support these two new attributes. Currently, they are not persisted or added to the CAR file.
**Related Issues:**
https://github.com/wso2/micro-integrator/issues/1673
Can fix together: https://github.com/wso2/devstudio-tooling-ei/issues/1091 | priority | add support for new attributes for sequence template parameters description with improvement ei runtime is introduced with two new attributes for template parameters integration studio need to support these two new attributes currently they are not persisted or added to the car file related issues can fix together | 1 |
309,840 | 9,480,631,045 | IssuesEvent | 2019-04-20 19:35:53 | varunvora/alcoding | https://api.github.com/repos/varunvora/alcoding | closed | Reduce unmapped handles | Priority: High Status: In Progress Type: Maintenance | The database has many more entries now. We should be able to reduce the unmapped handles considerably. | 1.0 | Reduce unmapped handles - The database has many more entries now. We should be able to reduce the unmapped handles considerably. | priority | reduce unmapped handles the database has many more entries now we should be able to reduce the unmapped handles considerably | 1 |
321,983 | 9,811,069,295 | IssuesEvent | 2019-06-12 22:14:56 | SIB-Colombia/Biodiversidad-en-cifras | https://api.github.com/repos/SIB-Colombia/Biodiversidad-en-cifras | closed | Include the cut-off date of the data used to consolidate the figures | Priority: High | Make the cut-off date of the data used to build the figures visible. Include it just below the buttons in the "Sobre las cifras" section
 | 1.0 | Include the cut-off date of the data used to consolidate the figures - Make the cut-off date of the data used to build the figures visible. Include it just below the buttons in the "Sobre las cifras" section
 | priority | include the cut off date of the data used to consolidate the figures make the cut off date of the data used to build the figures visible include it just below the buttons in the sobre las cifras section | 1
277,495 | 8,629,107,895 | IssuesEvent | 2018-11-21 19:33:40 | EScopeTeam/game-off-2018 | https://api.github.com/repos/EScopeTeam/game-off-2018 | opened | Message to play with the monster | Priority: High backend enhancement | Message to play with the monster through websocket given a monster ID. The endpoint should check that the monster is owned by the current user. | 1.0 | Message to play with the monster - Message to play with the monster through websocket given a monster ID. The endpoint should check that the monster is owned by the current user. | priority | message to play with the monster message to play with the monster through websocket given a monster id the endpoint should check that the monster is owned by the current user | 1 |
82,230 | 3,604,370,915 | IssuesEvent | 2016-02-03 22:36:19 | 18F/college-choice | https://api.github.com/repos/18F/college-choice | closed | 508 review | Area - Consumer Tool Area - Data Tool Bang 3 - High Bang For Buck 3 - High Buck 1 - Low Priority 1 Scrub (Sabrina) Stage 6 - Accept | An Accessibility Review is scheduled for tomorrow with the resident 18f accessibility expert. | 1.0 | 508 review - An Accessibility Review is scheduled for tomorrow with the resident 18f accessibility expert. | priority | review an accessibility review is scheduled for tomorrow with the resident accessibility expert | 1 |
300,735 | 9,212,148,056 | IssuesEvent | 2019-03-09 21:31:35 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Integrators invalidate the entire continuous state + propagate to kinematics values. | priority: high team: dynamics type: bug | See description in #10879. I'll paste here for completeness:
> It seems our integrators are broken also. They access (and invalidate) the entire continuous state of a diagram (see here and here for instance).
> Pre this PR, implicit Stribeck cache entries are declared dependent on kinematics_ticket, which essentially gets invalidated if the continuous state gets invalidated. As far as I understand, all systems have a "continuous state tracker" regardless of whether they have continuous state or not. Therefore, when an integrator says "mutate diagram continuous state", that has the side effect of invalidating the kinematics ticket.
> Note that the case we are investigating is that of a discrete state plant (no continuous state at all) however the invalidation triggered from the integrators propagated down the pipeline to something discrete.
| 1.0 | Integrators invalidate the entire continuous state + propagate to kinematics values. - See description in #10879. I'll paste here for completeness:
> It seems our integrators are broken also. They access (and invalidate) the entire continuous state of a diagram (see here and here for instance).
> Pre this PR, implicit Stribeck cache entries are declared dependent on kinematics_ticket, which essentially gets invalidated if the continuous state gets invalidated. As far as I understand, all systems have a "continuous state tracker" regardless of whether they have continuous state or not. Therefore, when an integrator says "mutate diagram continuous state", that has the side effect of invalidating the kinematics ticket.
> Note that the case we are investigating is that of a discrete state plant (no continuous state at all) however the invalidation triggered from the integrators propagated down the pipeline to something discrete.
| priority | integrators invalidate the entire continuous state propagate to kinematics values see description in i ll paste here for completeness it seems our integrators are broken also they access and invalidate the entire continuous state of a diagram see here and here for instance pre this pr implicit stribeck cache entries are declared dependent on kinematics ticket which essentially gets invalidated if the continuous state gets invalidated as far as i understand all systems have a continuous state tracker regardless of whether they have continuous state or not therefore when an integrator says mutate diagram continuous state that has the side effect of invalidating the kinematics ticket note that the case we are investigating is that of a discrete state plant no continuous state at all however the invalidation triggered from the integrators propagated down the pipeline to something discrete | 1 |
416,759 | 12,151,325,535 | IssuesEvent | 2020-04-24 19:43:55 | aseprite/aseprite | https://api.github.com/repos/aseprite/aseprite | closed | Absolute value option for the Hue/Saturation tool | enhancement high priority medium priority | In the beginning (unless I'm remembering wrong) this tool changed the values in an absolute way (+10 meant +10 for all colors selected), but now it's a percentage of the value, which is very helpful for keeping colors more in tune relative to each other; however, I miss being able to change values absolutely as well.
Aseprite 1.2.9-x64
Windows 7 x64 | 2.0 | Absolute value option for the Hue/Saturation tool - In the beginning (unless I'm remembering wrong) this tool changed the values in an absolute way (+10 meant +10 for all colors selected), but now it's a percentage of the value, which is very helpful for keeping colors more in tune relative to each other; however, I miss being able to change values absolutely as well.
Aseprite 1.2.9-x64
Windows 7 x64 | priority | absolute value option for the hue saturation tool in the beginning unless i m remembering wrong this tool changed the values in an absolute meant for all colors selected but now it s a percentage of the value which is very helpful to keep colors more in tune relative to each other however i miss being able to change values absolutely as well aseprite windows | 1 |
726,410 | 24,998,274,552 | IssuesEvent | 2022-11-03 04:11:13 | dtcenter/MET | https://api.github.com/repos/dtcenter/MET | closed | Add the Mean Absolute Difference (SPREAD_MD) to the ECNT line type | type: enhancement requestor: UK Met Office MET: Ensemble Verification priority: high | ## Describe the Enhancement ##
dtcenter/MET#2206 added the CRPS_EMP_FAIR statistic to the ECNT line type in MET-11.0.0-beta3. While doing that development, we overlooked [this comment](https://github.com/dtcenter/MET/issues/2206#issuecomment-1211753935) from @RogerHar.
This issue is to remedy that by adding the mean absolute difference statistic to the ECNT line type as an alternative measure of ensemble spread.
The development for dtcenter/MET#2206 already added the computation of the ensemble mean absolute difference. So this task is largely administrative, adding this statistics to the ECNT output line type and ensuring that Stat-Analysis can read/aggregate it over multiple cases.
Please see this reference:
Hopson, T. M. (2014). Assessing the Ensemble Spread–Error Relationship, _Monthly Weather Review,_ 142(3), 1125-1142. DOI: [10.1175/MWR-D-12-00111.1](https://doi.org/10.1175/MWR-D-12-00111.1)
### Time Estimate ###
4 hours.
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
None needed.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
UK MetOffice 2799991
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
Will update existing issues with this information:
- dtcenter/METdataio#131
- dtcenter/METviewer#434
- dtcenter/METcalcpy#229
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| 1.0 | Add the Mean Absolute Difference (SPREAD_MD) to the ECNT line type - ## Describe the Enhancement ##
dtcenter/MET#2206 added the CRPS_EMP_FAIR statistic to the ECNT line type in MET-11.0.0-beta3. While doing that development, we overlooked [this comment](https://github.com/dtcenter/MET/issues/2206#issuecomment-1211753935) from @RogerHar.
This issue is to remedy that by adding the mean absolute difference statistic to the ECNT line type as an alternative measure of ensemble spread.
The development for dtcenter/MET#2206 already added the computation of the ensemble mean absolute difference. So this task is largely administrative, adding this statistics to the ECNT output line type and ensuring that Stat-Analysis can read/aggregate it over multiple cases.
Please see this reference:
Hopson, T. M. (2014). Assessing the Ensemble Spread–Error Relationship, _Monthly Weather Review,_ 142(3), 1125-1142. DOI: [10.1175/MWR-D-12-00111.1](https://doi.org/10.1175/MWR-D-12-00111.1)
### Time Estimate ###
4 hours.
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
None needed.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
UK MetOffice 2799991
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
Will update existing issues with this information:
- dtcenter/METdataio#131
- dtcenter/METviewer#434
- dtcenter/METcalcpy#229
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| priority | add the mean absolute difference spread md to the ecnt line type describe the enhancement dtcenter met added the crps emp fair statistic to the ecnt line type in met while doing that development we overlooked from rogerhar this issue is to remedy that by adding the mean absolute difference statistic to the ecnt line type as an alternative measure of ensemble spread the development for dtcenter met already added the computation of the ensemble mean absolute difference so this task is largely administrative adding this statistics to the ecnt output line type and ensuring that stat analysis can read aggregate it over multiple cases please see this reference hopson t m assessing the ensemble spread–error relationship monthly weather review doi time estimate hours sub issues consider breaking the enhancement down into sub issues none needed relevant deadlines list relevant project deadlines here or state none funding source uk metoffice define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components will update existing issues with this information dtcenter metdataio dtcenter metviewer dtcenter metcalcpy enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select 
reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue | 1 |
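As a rough illustration of the statistic requested above: one common definition of the ensemble mean absolute difference is the mean of |x_i - x_j| over all distinct pairs of ensemble members. The following is a hedged Python sketch of that definition, not MET's C++ implementation, and the function name is made up:

```python
from itertools import combinations

def mean_absolute_difference(members):
    # Mean of |x_i - x_j| over all distinct pairs of ensemble members.
    pairs = list(combinations(members, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# For members [1, 2, 4], the pairs (1,2), (1,4), (2,4) have absolute
# differences 1, 3, 2, so the mean absolute difference is 2.0.
```

An identical-member ensemble has zero spread under this measure, and wider ensembles score higher, which is why it serves as an alternative spread statistic.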
39,389 | 2,854,268,361 | IssuesEvent | 2015-06-01 23:09:08 | BlockServerProject/BlockServer | https://api.github.com/repos/BlockServerProject/BlockServer | closed | Client Ignores Chunk Packets | Bug High Priority Networking: Minecraft | The Client appears to seem to ignore chunk packets. After all packets are sent, the client keeps sending it's ping/pong as usual, but with different sequence numbers! For example, the last chunk packet's sequence number is 31, but the client responds with a 0x00 PING with a sequence number of 3! However, the client does ACK the chunk packets with the correct sequence numbers, but doesn't seem to stay with us. | 1.0 | Client Ignores Chunk Packets - The Client appears to seem to ignore chunk packets. After all packets are sent, the client keeps sending it's ping/pong as usual, but with different sequence numbers! For example, the last chunk packet's sequence number is 31, but the client responds with a 0x00 PING with a sequence number of 3! However, the client does ACK the chunk packets with the correct sequence numbers, but doesn't seem to stay with us. | priority | client ignores chunk packets the client appears to seem to ignore chunk packets after all packets are sent the client keeps sending it s ping pong as usual but with different sequence numbers for example the last chunk packet s sequence number is but the client responds with a ping with a sequence number of however the client does ack the chunk packets with the correct sequence numbers but doesn t seem to stay with us | 1 |
311,301 | 9,531,666,706 | IssuesEvent | 2019-04-29 16:33:34 | YCPjsteck/CS320-Sp19-Labyrinth-of-Tramateck | https://api.github.com/repos/YCPjsteck/CS320-Sp19-Labyrinth-of-Tramateck | closed | Real Database | HIGH PRIORITY development done | Ref: #9
Marked as MS3 as that is when it is required by, but preferably have it done by MS2. | 1.0 | Real Database - Ref: #9
Marked as MS3 as that is when it is required by, but preferably have it done by MS2. | priority | real database ref marked as as that is when it is required by but preferably have it done by | 1 |
322,600 | 9,820,040,240 | IssuesEvent | 2019-06-14 00:35:15 | andrewvt/HPTS | https://api.github.com/repos/andrewvt/HPTS | closed | Small Changes | Awaiting Review High Priority enhancement | Policy Edit/add
- [x] Move Policy number field down to between model policy and Bill sponsor
Heat map
- [x] Make heat more evident
| 1.0 | Small Changes - Policy Edit/add
- [x] Move Policy number field down to between model policy and Bill sponsor
Heat map
- [x] Make heat more evident
| priority | small changes policy edit add move policy number field down to between model policy and bill sponsor heat map make heat more evident | 1 |
659,437 | 21,927,847,177 | IssuesEvent | 2022-05-23 06:59:04 | papaya-insurtech/mango-bug-report | https://api.github.com/repos/papaya-insurtech/mango-bug-report | closed | UAT/ UI hoa hồng: thoát app sau đó vào lại báo lỗi tại thẻ hoa hồng | bug fixed priority/high f/mobile | UAT/ UI hoa hồng: thoát app sau đó vào lại báo lỗi tại thẻ hoa hồng

| 1.0 | UAT/ UI hoa hồng: thoát app sau đó vào lại báo lỗi tại thẻ hoa hồng - UAT/ UI hoa hồng: thoát app sau đó vào lại báo lỗi tại thẻ hoa hồng

| priority | uat ui hoa hồng thoát app sau đó vào lại báo lỗi tại thẻ hoa hồng uat ui hoa hồng thoát app sau đó vào lại báo lỗi tại thẻ hoa hồng | 1 |
354,231 | 10,564,241,488 | IssuesEvent | 2019-10-05 00:18:45 | OpenLiberty/ci.maven | https://api.github.com/repos/OpenLiberty/ci.maven | closed | NPE with liberty.bootstrap property used in the pom | bug high priority | mvn liberty:create
```
[ERROR] Failed to execute goal io.openliberty.tools:liberty-maven-plugin:3.0.2-SNAPSHOT:create (default-cli) on project guide-getting-started: null: MojoExecutionException: NullPointerException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.openliberty.tools:liberty-maven-plugin:3.0.2-SNAPSHOT:create (default-cli) on project guide-getting-started: null
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException
at org.codehaus.mojo.pluginsupport.MojoSupport.execute (MojoSupport.java:137)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: java.lang.NullPointerException
at java.util.ArrayList.addAll (ArrayList.java:581)
at io.openliberty.tools.maven.server.StartDebugMojoSupport.writeJvmOptions (StartDebugMojoSupport.java:378)
at io.openliberty.tools.maven.server.StartDebugMojoSupport.copyConfigFiles (StartDebugMojoSupport.java:207)
at io.openliberty.tools.maven.server.CreateServerMojo.doExecute (CreateServerMojo.java:86)
at org.codehaus.mojo.pluginsupport.MojoSupport.execute (MojoSupport.java:122)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR]
```
| 1.0 | NPE with liberty.bootstrap property used in the pom - mvn liberty:create
```
[ERROR] Failed to execute goal io.openliberty.tools:liberty-maven-plugin:3.0.2-SNAPSHOT:create (default-cli) on project guide-getting-started: null: MojoExecutionException: NullPointerException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.openliberty.tools:liberty-maven-plugin:3.0.2-SNAPSHOT:create (default-cli) on project guide-getting-started: null
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException
at org.codehaus.mojo.pluginsupport.MojoSupport.execute (MojoSupport.java:137)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: java.lang.NullPointerException
at java.util.ArrayList.addAll (ArrayList.java:581)
at io.openliberty.tools.maven.server.StartDebugMojoSupport.writeJvmOptions (StartDebugMojoSupport.java:378)
at io.openliberty.tools.maven.server.StartDebugMojoSupport.copyConfigFiles (StartDebugMojoSupport.java:207)
at io.openliberty.tools.maven.server.CreateServerMojo.doExecute (CreateServerMojo.java:86)
at org.codehaus.mojo.pluginsupport.MojoSupport.execute (MojoSupport.java:122)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR]
```
| priority | npe with liberty bootstrap property used in the pom mvn liberty create failed to execute goal io openliberty tools liberty maven plugin snapshot create default cli on project guide getting started null mojoexecutionexception nullpointerexception org apache maven lifecycle lifecycleexecutionexception failed to execute goal io openliberty tools liberty maven plugin snapshot create default cli on project guide getting started null at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher 
java caused by org apache maven plugin mojoexecutionexception at org codehaus mojo pluginsupport mojosupport execute mojosupport java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java caused by java lang nullpointerexception at java util arraylist addall arraylist java at io openliberty tools maven server startdebugmojosupport writejvmoptions startdebugmojosupport java at io openliberty 
tools maven server startdebugmojosupport copyconfigfiles startdebugmojosupport java at io openliberty tools maven server createservermojo doexecute createservermojo java at org codehaus mojo pluginsupport mojosupport execute mojosupport java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java | 1 |
251,427 | 8,015,277,720 | IssuesEvent | 2018-07-25 09:29:46 | exercism/exercism.io | https://api.github.com/repos/exercism/exercism.io | closed | Check filepaths, not just filenames for tests. | area/website priority/high type/bug | It seems like on every submitted Rust exercise page the test suite is not rendered.

| 1.0 | Check filepaths, not just filenames for tests. - It seems like on every submitted Rust exercise page the test suite is not rendered.

| priority | check filepaths not just filenames for tests it seems like on every submitted rust exercise page the test suite is not rendered | 1 |
691,148 | 23,684,460,830 | IssuesEvent | 2022-08-29 04:07:43 | Australian-Genomics/CTRL | https://api.github.com/repos/Australian-Genomics/CTRL | closed | Add a "Glossary" page | priority: high difficulty: medium | This can be used together with the solution to https://github.com/Australian-Genomics/CTRL/issues/19 to allow administrators to create and link to glossaries. | 1.0 | Add a "Glossary" page - This can be used together with the solution to https://github.com/Australian-Genomics/CTRL/issues/19 to allow administrators to create and link to glossaries. | priority | add a glossary page this can be used together with the solution to to allow administrators to create and link to glossaries | 1 |
206,426 | 7,112,275,210 | IssuesEvent | 2018-01-17 16:33:07 | RhoInc/aeexplorer | https://api.github.com/repos/RhoInc/aeexplorer | closed | Prevalence filter not working as expected ... | bug high priority | Last row shouldn't be visible. See project 273.
<img width="805" alt="screen shot 2017-11-06 at 8 31 34 am" src="https://user-images.githubusercontent.com/3680095/32457276-67ed82b2-c2dd-11e7-8ae8-5e2a2a7bfa89.png">
| 1.0 | Prevalence filter not working as expected ... - Last row shouldn't be visible. See project 273.
<img width="805" alt="screen shot 2017-11-06 at 8 31 34 am" src="https://user-images.githubusercontent.com/3680095/32457276-67ed82b2-c2dd-11e7-8ae8-5e2a2a7bfa89.png">
| priority | prevalence filter not working as expected last row shouldn t be visible see project img width alt screen shot at am src | 1 |
587,916 | 17,634,611,360 | IssuesEvent | 2021-08-19 12:22:50 | hazelcast/hazelcast-python-client | https://api.github.com/repos/hazelcast/hazelcast-python-client | closed | Projections | Type: Enhancement Priority: High good first issue | The Python client lacks Projection support for queries.
There are cases where instead of sending all the data returned by a query from a member, you want to transform (strip down) each result object in order to avoid redundant network traffic. For example, you select all employees based on some criteria, but you just want to return their name instead of the whole Employee object. It is easily doable with the Projection API.
For further information, see
https://docs.hazelcast.org/docs/latest-dev/manual/html-single/index.html#projections | 1.0 | Projections - The Python client lacks Projection support for queries.
There are cases where instead of sending all the data returned by a query from a member, you want to transform (strip down) each result object in order to avoid redundant network traffic. For example, you select all employees based on some criteria, but you just want to return their name instead of the whole Employee object. It is easily doable with the Projection API.
For further information, see
https://docs.hazelcast.org/docs/latest-dev/manual/html-single/index.html#projections | priority | projections the python client lacks projection support for queries there are cases where instead of sending all the data returned by a query from a member you want to transform strip down each result object in order to avoid redundant network traffic for example you select all employees based on some criteria but you just want to return their name instead of the whole employee object it is easily doable with the projection api for further information see | 1 |
69,040 | 3,295,136,184 | IssuesEvent | 2015-10-31 17:33:37 | Metaswitch/clearwater-etcd | https://api.github.com/repos/Metaswitch/clearwater-etcd | closed | Chronos never leaves the ETCD cluster when Sprout is decommisioned | bug critical high-priority | During the Clearwater Core spin up new nodes upgrade procedure (elastic scaling), we noticed that after running the sudo service clearwater-etcd decommission command, the sprout node leaves the Memcached cluster okay but didn't leave the Chronos cluster. We had to manually edit the chronos_cluster.conf file and remove the decomm'ed node IP, then load the Chronos configuration back to the etcd cluster before Chronos exited the cluster. Continuing the upgrade without manually correcting this led to new Sprout nodes never joining the cluster properly and had to be forced.
```
Describing the Sprout Memcached cluster in site site1:
The local node is in this cluster
The cluster is stable
192.168.165.98 is in state normal
Describing the Sprout Chronos cluster in site site1:
The local node is in this cluster
The cluster is *not* stable
192.168.165.97 is in state waiting to leave
192.168.165.98 is in state normal
[sprout-2]clearwater@sprout-2:/etc/clearwater$ clearwater-etcdctl cluster-health
cluster is healthy
member 1aaba0552b9a0e13 is healthy
member 714d6c8d81113810 is healthy
member 7c02154e90a0164 is healthy
member a10070fbf0f78081 is healthy
member b2410fc17c380591 is healthy
[sprout-2]clearwater@sprout-2:/etc/chronos$ cat chronos_cluster.conf
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#
# WARNING - THIS FILE IS GENERATED BY ETCD AND SHOULD NOT BE EDITED DIRECTLY
#
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[cluster]
localhost = 192.168.165.98
node = 192.168.165.97
node = 192.168.165.98
sudo vi chronos_cluster.conf
sprout-2]clearwater@sprout-2:/etc/chronos$ cat chronos_cluster.conf
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#
# WARNING - THIS FILE IS GENERATED BY ETCD AND SHOULD NOT BE EDITED DIRECTLY
#
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[cluster]
localhost = 192.168.165.98
node = 192.168.165.98
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_chronos_cluster sprout
cluster-manager/scripts/check_cluster_state
Describing the Sprout Memcached cluster in site site1:
The local node is in this cluster
The cluster is stable
192.168.165.98 is in state normal
Describing the Sprout Chronos cluster in site site1:
The local node is in this cluster
The cluster is stable
192.168.165.98 is in state normal
``` | 1.0 | Chronos never leaves the ETCD cluster when Sprout is decommisioned - During the Clearwater Core spin up new nodes upgrade procedure (elastic scaling), we noticed that after running the sudo service clearwater-etcd decommission command, the sprout node leaves the Memcached cluster okay but didn't leave the Chronos cluster. We had to manually edit the chronos_cluster.conf file and remove the decomm'ed node IP, then load the Chronos configuration back to the etcd cluster before Chronos exited the cluster. Continuing the upgrade without manually correcting this led to new Sprout nodes never joining the cluster properly and had to be forced.
```
Describing the Sprout Memcached cluster in site site1:
The local node is in this cluster
The cluster is stable
192.168.165.98 is in state normal
Describing the Sprout Chronos cluster in site site1:
The local node is in this cluster
The cluster is *not* stable
192.168.165.97 is in state waiting to leave
192.168.165.98 is in state normal
[sprout-2]clearwater@sprout-2:/etc/clearwater$ clearwater-etcdctl cluster-health
cluster is healthy
member 1aaba0552b9a0e13 is healthy
member 714d6c8d81113810 is healthy
member 7c02154e90a0164 is healthy
member a10070fbf0f78081 is healthy
member b2410fc17c380591 is healthy
[sprout-2]clearwater@sprout-2:/etc/chronos$ cat chronos_cluster.conf
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#
# WARNING - THIS FILE IS GENERATED BY ETCD AND SHOULD NOT BE EDITED DIRECTLY
#
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[cluster]
localhost = 192.168.165.98
node = 192.168.165.97
node = 192.168.165.98
sudo vi chronos_cluster.conf
sprout-2]clearwater@sprout-2:/etc/chronos$ cat chronos_cluster.conf
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#
# WARNING - THIS FILE IS GENERATED BY ETCD AND SHOULD NOT BE EDITED DIRECTLY
#
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[cluster]
localhost = 192.168.165.98
node = 192.168.165.98
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_chronos_cluster sprout
cluster-manager/scripts/check_cluster_state
Describing the Sprout Memcached cluster in site site1:
The local node is in this cluster
The cluster is stable
192.168.165.98 is in state normal
Describing the Sprout Chronos cluster in site site1:
The local node is in this cluster
The cluster is stable
192.168.165.98 is in state normal
``` | priority | chronos never leaves the etcd cluster when sprout is decommisioned during the clearwater core spin up new nodes upgrade procedure elastic scaling we noticed that after running the sudo service clearwater etcd decommission command the sprout node leaves the memcached cluster okay but didn t leave the chronos cluster we had to manually edit the chronos cluster conf file and remove the decomm ed node ip then load the chronos configuration back to the etcd cluster before chronos exited the cluster continuing the upgrade without manually correcting this led to new sprout nodes never joining the cluster properly and had to be forced describing the sprout memcached cluster in site the local node is in this cluster the cluster is stable is in state normal describing the sprout chronos cluster in site the local node is in this cluster the cluster is not stable is in state waiting to leave is in state normal clearwater sprout etc clearwater clearwater etcdctl cluster health cluster is healthy member is healthy member is healthy member is healthy member is healthy member is healthy clearwater sprout etc chronos cat chronos cluster conf warning this file is generated by etcd and should not be edited directly localhost node node sudo vi chronos cluster conf sprout clearwater sprout etc chronos cat chronos cluster conf warning this file is generated by etcd and should not be edited directly localhost node usr share clearwater clearwater cluster manager scripts load from chronos cluster sprout cluster manager scripts check cluster state describing the sprout memcached cluster in site the local node is in this cluster the cluster is stable is in state normal describing the sprout chronos cluster in site the local node is in this cluster the cluster is stable is in state normal | 1 |
306,240 | 9,382,811,770 | IssuesEvent | 2019-04-05 00:03:11 | LakeEffectRobotics/LakeEffectScoutingApp | https://api.github.com/repos/LakeEffectRobotics/LakeEffectScoutingApp | closed | Make some form of interim reader | HIGH PRIORITY Reader | Due to removal of radio groups and increased data output, the spreadsheets are completely unreadable. Some of the automatic code needs to be replaced by some manual cleanup. | 1.0 | Make some form of interim reader - Due to removal of radio groups and increased data output, the spreadsheets are completely unreadable. Some of the automatic code needs to be replaced by some manual cleanup. | priority | make some form of interim reader due to removal of radio groups and increased data output the spreadsheets are completely unreadable some of the automatic code needs to be replaced by some manual cleanup | 1 |
591,043 | 17,793,651,140 | IssuesEvent | 2021-08-31 19:16:57 | knative/docs | https://api.github.com/repos/knative/docs | closed | Knative Kafka install link broken | kind/bug priority/high kind/eventing | <!-- If you're reporting a bug with Knative itself, open the bug in the corresponding repo. IE., https://github.com/knative/serving for an issue with serving. -->
<!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. -->
## Expected Behavior
There is a link in this doc page (https://knative.dev/docs/developer/eventing/sources/kafka-source/) to another link that is not valid, for how to install the kafka source for knative:
"Pre-requisites: A Kubernetes cluster with Knative Kafka Source installed."
that link is: https://knative.dev/docs/developer/admin/install/
which I would expect to take me to a place where I could learn how to install the Knative Kafka source.
## Actual Behavior
I get a page saying that "That Page is not found"

## Steps to Reproduce the Problem
1. Go to https://knative.dev/docs/developer/admin/install/
1. click on the link to install the Knative Kafka source
| 1.0 | Knative Kafka install link broken - <!-- If you're reporting a bug with Knative itself, open the bug in the corresponding repo. IE., https://github.com/knative/serving for an issue with serving. -->
<!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. -->
## Expected Behavior
There is a link in this doc page (https://knative.dev/docs/developer/eventing/sources/kafka-source/) to another link that is not valid, for how to install the kafka source for knative:
"Pre-requisites: A Kubernetes cluster with Knative Kafka Source installed."
that link is: https://knative.dev/docs/developer/admin/install/
which I would expect to take me to a place where I could learn how to install the Knative Kafka source.
## Actual Behavior
I get a page saying that "That Page is not found"

## Steps to Reproduce the Problem
1. Go to https://knative.dev/docs/developer/admin/install/
1. click on the link to install the Knative Kafka source
| priority | knative kafka install link broken expected behavior there is a link in this doc page to another link that is not valid for how to install the kafka source for knative pre requisites a kubernetes cluster with knative kafka source installed that link is which i would expect to take me to a place where i could learn how to install the knative kafka source actual behavior get a page saying that that page is not found steps to reproduce the problem go to click on the link to install the knative kafka source | 1 |
430,809 | 12,466,306,837 | IssuesEvent | 2020-05-28 15:17:08 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Image alignment | Accepted GeoStory Priority: High enhancement | ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
- If not in an immersive content (column), it makes no sense to align an image on the left/right if you cannot put other content inline; make images sections like other components (text, web page)

**What kind of improvement you want to add?** (check one with "x", remove the others)
- [ ] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
| 1.0 | Image alignment - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
- If not in an immersive content (column), it makes no sense to align an image on the left/right if you cannot put other content inline; make images sections like other components (text, web page)

**What kind of improvement you want to add?** (check one with "x", remove the others)
- [ ] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
| priority | image alignment description if not in an immersive content column it makes no sense to align an image on the left right if you cannot put other content inline make images sections like other components text web page what kind of improvement you want to add check one with x remove the others minor changes to existing features code style update formatting local variables refactoring no functional changes no api changes build related changes ci related changes other please describe other useful information | 1 |
239,947 | 7,800,185,126 | IssuesEvent | 2018-06-09 06:05:27 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0007836:
CalDav service does not return reoccurring events correctly | Bug CalDAV Mantis high priority | **Reported by shochdoerfer on 15 Feb 2013 08:24**
**Version:** Joey (2012.10.3)
When a CalDav client requests the events for a given time range it seems that Tine 2.0 does not return the reoccurring events happening in the given time range.
To sync my Tine 2.0 calendar with my Android device I use the App named CalDAV-Sync[1]. As it happens all my events get synced except for the reoccurring events. After discussing this with the author of the application it seems that Tine 2.0 does not return these events when the client application uses the time-range filter. The client request looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<D:getetag />
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20130101T000000Z" end="20131101T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
When using the Long-term sync option of CalDAV-Sync all events get synced, so this really seems to be an issue with the CalDAV service implemented in Tine 2.0. However, using the Long-term sync is not a solution as it has to be triggered manually.
[1] http://dmfs.org/caldav/
| 1.0 | 0007836:
CalDav service does not return reoccurring events correctly - **Reported by shochdoerfer on 15 Feb 2013 08:24**
**Version:** Joey (2012.10.3)
When a CalDav client requests the events for a given time range it seems that Tine 2.0 does not return the reoccurring events happening in the given time range.
To sync my Tine 2.0 calendar with my Android device I use the App named CalDAV-Sync[1]. As it happens all my events get synced except for the reoccurring events. After discussing this with the author of the application it seems that Tine 2.0 does not return these events when the client application uses the time-range filter. The client request looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<D:getetag />
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20130101T000000Z" end="20131101T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
When using the Long-term sync option of CalDAV-Sync all events get synced, so this really seems to be an issue with the CalDAV service implemented in Tine 2.0. However, using the Long-term sync is not a solution as it has to be triggered manually.
[1] http://dmfs.org/caldav/
| priority | caldav service does not return reoccurring events correctly reported by shochdoerfer on feb version joey when a caldav client requests the events for a given time range it seems that tine does not return the reoccurring events happening in the given time range to sync my tine calendar with my android device i use the app named caldav sync as it happens all my events get synced except for the reoccurring events after discussing this with the author of the application it seems that tine does not return these events when the client application uses the time range filter the client request looks like this lt xml version quot quot encoding quot utf quot gt lt c calendar query xmlns d quot dav quot xmlns c quot urn ietf params xml ns caldav quot gt lt d prop gt lt d getetag gt lt d prop gt lt c filter gt lt c comp filter name quot vcalendar quot gt lt c comp filter name quot vevent quot gt lt c time range start quot quot end quot quot gt lt c comp filter gt lt c comp filter gt lt c filter gt lt c calendar query gt when using the long term sync option of caldav sync all events get synced so this really seems to be an issue with the caldav service implemented in tine however using the long term sync is not a solution as it has to be triggered manually | 1 |
794,466 | 28,037,252,642 | IssuesEvent | 2023-03-28 15:51:10 | AY2223S2-CS2113-T13-1/tp | https://api.github.com/repos/AY2223S2-CS2113-T13-1/tp | closed | DeleteCommand to delete all related currency transaction | type.Story priority.High | E.g. `delete-account SGD` will delete all SGD related transaction in the main list | 1.0 | DeleteCommand to delete all related currency transaction - E.g. `delete-account SGD` will delete all SGD related transaction in the main list | priority | deletecommand to delete all related currency transaction e g delete account sgd will delete all sgd related transaction in the main list | 1 |
539,642 | 15,792,451,862 | IssuesEvent | 2021-04-02 07:07:37 | sopra-fs21-group-16/mth-server | https://api.github.com/repos/sopra-fs21-group-16/mth-server | opened | Write tests for the get mapping endpoint for the scheduling session (scheduling) | high priority task | Write tests for the get mapping endpoint for the scheduling session #5 according to the REST specification table.
/matches/scheduling | GET | | Query | | | 200 OK | List | Once user A accepts the request of user B to start a scheduling session, the system will get the stored activities of their match | 1.0 | Write tests for the get mapping endpoint for the scheduling session (scheduling) - Write tests for the get mapping endpoint for the scheduling session #5 according to the REST specification table.
/matches/scheduling | GET | | Query | | | 200 OK | List | Once user A accepts the request of user B to start a scheduling session, the system will get the stored activities of their match | priority | write tests for the get mapping endpoint for the scheduling session scheduling write tests for the get mapping endpoint for the scheduling session according to the rest specification table matches scheduling get query ok list once user a accepts the request of user b to start a scheduling session the system will get the stored activities of their match | 1 |
388,406 | 11,487,469,840 | IssuesEvent | 2020-02-11 12:01:41 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Grid with columnMenu.columns set to false and filterable set to true throws an error. | Bug C: Grid Kendo1 Next LIB Priority 5 SEV: High | ### Bug report
When columnMenu.columns is set to false, an error is thrown when you try to filter a column in the Grid.
This is a regression introduced in the latest version.
### Reproduction of the problem
1. Open this Dojo example - https://dojo.telerik.com/UYEdUbeG.
2. Try to filter a column.
### Current behavior
An error is thrown.
### Expected/desired behavior
The user should be able to filter the columns.
### Environment
* **Kendo UI version:** 2020.1.114
* **Browser:** [all]
| 1.0 | Grid with columnMenu.columns set to false and filterable set to true throws an error. - ### Bug report
When columnMenu.columns is set to false, an error is thrown when you try to filter a column in the Grid.
This is a regression introduced in the latest version.
### Reproduction of the problem
1. Open this Dojo example - https://dojo.telerik.com/UYEdUbeG.
2. Try to filter a column.
### Current behavior
An error is thrown.
### Expected/desired behavior
The user should be able to filter the columns.
### Environment
* **Kendo UI version:** 2020.1.114
* **Browser:** [all]
| priority | grid with columnmenu columns set to false and filterable set to true throws an error bug report when columnmenu columns is set to false an error is thrown when you try to filter a column in the grid this is a regression introduced in the latest version reproduction of the problem open this dojo example try to filter a column current behavior an error is thrown expected desired behavior the user should be able to filter the columns environment kendo ui version browser | 1 |
766,580 | 26,889,905,055 | IssuesEvent | 2023-02-06 08:04:39 | codersforcauses/wadl | https://api.github.com/repos/codersforcauses/wadl | opened | Investigate Production Bugs | bug priority::high | ## Basic Information
These Bugs were found by Keeley in production.
Admin Institution:
"small bug, on the website if I click on institutions, go back to the main page and then click on institutions again - each time the list of schools duplicates. it resets back to normal amount if webpage is closed and opened"
Admin Edit institution:
"when editing institutions (already entered without codes) app will not allow user to make changes and then save without now adding a code"
| 1.0 | Investigate Production Bugs - ## Basic Information
These Bugs were found by Keeley in production.
Admin Institution:
"small bug, on the website if I click on institutions, go back to the main page and then click on institutions again - each time the list of schools duplicates. it resets back to normal amount if webpage is closed and opened"
Admin Edit institution:
"when editing institutions (already entered without codes) app will not allow user to make changes and then save without now adding a code"
| priority | investigate production bugs basic information these bugs were found by keeley in production admin institution small bug on the website if i click on institutions go back to the main page and then click on institutions again each time the list of schools duplicates it resets back to normal amount if webpage is closed and opened admin edit institution when editing institutions already entered without codes app will not allow user to make changes and then save without now adding a code | 1 |
472,300 | 13,622,453,602 | IssuesEvent | 2020-09-24 03:40:17 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | Silence does nothing against enemies | High Priority reported in v0.9.1 | Silence affects the player, but not the enemies, because they have no abilities to silence. | 1.0 | Silence does nothing against enemies - Silence affects the player, but not the enemies, because they have no abilities to silence. | priority | silence does nothing against enemies silence affects the player but not the enemies because they have no abilities to silence | 1 |
744,944 | 25,962,169,270 | IssuesEvent | 2022-12-19 01:14:10 | steedos/steedos-platform | https://api.github.com/repos/steedos/steedos-platform | closed | Mobile app: homepage and some object list views of the internal portal behave abnormally | done priority: High | URL: https://huayan.steedos.cn
Version: 2.3.0
Screenshots of the issue: the homepage throws an error and list views fail to load
<img width="754" alt="image" src="https://user-images.githubusercontent.com/41402189/202951296-5bd5bdbe-d163-43b6-b677-e5d916b004de.png">
| 1.0 | Mobile app: homepage and some object list views of the internal portal behave abnormally - URL: https://huayan.steedos.cn
Version: 2.3.0
Screenshots of the issue: the homepage throws an error and list views fail to load
<img width="754" alt="image" src="https://user-images.githubusercontent.com/41402189/202951296-5bd5bdbe-d163-43b6-b677-e5d916b004de.png">
| priority | mobile app homepage and some object list views of the internal portal behave abnormally url version screenshots of the issue the homepage throws an error and list views fail to load img width alt image src | 1 |
256,814 | 8,129,310,070 | IssuesEvent | 2018-08-17 14:41:58 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [profile] Exclude spring-webmvc transitive dependency from spring-social-web | CI priority: high quality | Dependency `org.springframework.social:spring-social-web@1.1.6.RELEASE` is including `org.springframework:spring-webmvc@4.1.8.RELEASE` which we aren't using. Please exclude it. | 1.0 | [profile] Exclude spring-webmvc transitive dependency from spring-social-web - Dependency `org.springframework.social:spring-social-web@1.1.6.RELEASE` is including `org.springframework:spring-webmvc@4.1.8.RELEASE` which we aren't using. Please exclude it. | priority | exclude spring webmvc transitive dependency from spring social web dependency org springframework social spring social web release is including org springframework spring webmvc release which we aren t using please exclude it | 1 |
84,005 | 3,645,780,655 | IssuesEvent | 2016-02-15 16:00:48 | littleweaver/littleweaverweb.com | https://api.github.com/repos/littleweaver/littleweaverweb.com | closed | Contact page | decision needed high priority | Right now, the "Get in Touch" link on the home page goes to a contact page that doesn't exist. Something should probably be there.
A lot of businesses have some kind of contact form, but we don't necessarily have to. | 1.0 | Contact page - Right now, the "Get in Touch" link on the home page goes to a contact page that doesn't exist. Something should probably be there.
A lot of businesses have some kind of contact form, but we don't necessarily have to. | priority | contact page right now the get in touch link on the home page goes to a contact page that doesn t exist something should probably be there a lot of businesses have some kind of contact form but we don t necessarily have to | 1 |
650,569 | 21,409,421,631 | IssuesEvent | 2022-04-22 03:04:03 | roq-trading/roq-issues | https://api.github.com/repos/roq-trading/roq-issues | closed | [roq-deribit] Undocumented field for OrderCancelReject (<35> = ' 9') | bug high priority support | The FIX message was
```
8=FIX.4.4|9=182|35=9|49=DERIBITSERVER|56=ROQ_TRADING|34=894|52=20220421-15:04:41.637|41=tgABIQQAAgAAuyQPAmAlDrmvq4itf55i|11=ETH-22630567025|39=2|58=already_filled|151=0|6=3151.200|100010=roq-1-1057|10=114|
```
Deribit hasn't documented custom tag 100010 for this message type ([here](https://docs.deribit.com/#response-on-failure)).
As per client request, a new flag has been added to the roq-deribit gateway to continue for such exception:
```
--fix_continue_from_parse_exception
``` | 1.0 | [roq-deribit] Undocumented field for OrderCancelReject (<35> = ' 9') - The FIX message was
```
8=FIX.4.4|9=182|35=9|49=DERIBITSERVER|56=ROQ_TRADING|34=894|52=20220421-15:04:41.637|41=tgABIQQAAgAAuyQPAmAlDrmvq4itf55i|11=ETH-22630567025|39=2|58=already_filled|151=0|6=3151.200|100010=roq-1-1057|10=114|
```
Deribit hasn't documented custom tag 100010 for this message type ([here](https://docs.deribit.com/#response-on-failure)).
As per client request, a new flag has been added to the roq-deribit gateway to continue for such exception:
```
--fix_continue_from_parse_exception
``` | priority | undocumented field for ordercancelreject the fix message was fix deribitserver roq trading eth already filled roq deribit hasn t documented custom tag for this message type as per client request a new flag has been added to the roq deribit gateway to continue for such exception fix continue from parse exception | 1 |
319,435 | 9,744,125,625 | IssuesEvent | 2019-06-03 05:43:01 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Debug error on Homepage | [Priority: HIGH] | I’ve installed the plugin but it’s not working on the homepage, giving me these errors:
Notice: Trying to get property ‘ID’ of non-object in C:\inetpub\wwwroot\getupspain.es\wp-content\plugins\accelerated-mobile-pages\includes\features\functions.php on line 174
Notice: Undefined offset: 0 in
C:\inetpub\wwwroot\getupspain.es\wp-includes\class-wp-query.php
on line
3227
I have the following system:
PHP Version: 7.2.7
Server Software: Microsoft-IIS/10.0
MySQL: 5.7.24
WordPress Version: 5.1.1
Avada Version: 5.9.1
Thank you!!
The page I need help with: https://www.getupspain.es/amp
REF: https://wordpress.org/support/topic/error-homepage-not-working/ | 1.0 | Debug error on Homepage - I’ve installed the plugin but it’s not working on the homepage, giving me these errors:
Notice: Trying to get property ‘ID’ of non-object in C:\inetpub\wwwroot\getupspain.es\wp-content\plugins\accelerated-mobile-pages\includes\features\functions.php on line 174
Notice: Undefined offset: 0 in
C:\inetpub\wwwroot\getupspain.es\wp-includes\class-wp-query.php
on line
3227
I have the following system:
PHP Version: 7.2.7
Server Software: Microsoft-IIS/10.0
MySQL: 5.7.24
WordPress Version: 5.1.1
Avada Version: 5.9.1
Thank you!!
The page I need help with: https://www.getupspain.es/amp
REF: https://wordpress.org/support/topic/error-homepage-not-working/ | priority | debug error on homepage i’ve installed the plugin but it’s not working on the homepage giving me these errors notice trying to get property ‘id’ of non object in c inetpub wwwroot getupspain es wp content plugins accelerated mobile pages includes features functions php on line notice undefined offset in c inetpub wwwroot getupspain es wp includes class wp query php on line i have the following system php version server software microsoft iis mysql wordpress version avada version thank you the page i need help with ref | 1 |
277,219 | 8,622,170,063 | IssuesEvent | 2018-11-20 19:29:38 | LetsEatCo/LetsEat | https://api.github.com/repos/LetsEatCo/LetsEat | closed | ✨ View Store Product / Meal content & Add to Cart | priority: high 🔥 | **Is your feature request related to a problem? Please describe.**
Customers cannot view the content of a Product / Meal
**Describe the solution you'd like**
Implement modals to see Product / Meal content and Add them to Cart
**Describe alternatives you've considered**
None
**Additional context**
None
| 1.0 | ✨ View Store Product / Meal content & Add to Cart - **Is your feature request related to a problem? Please describe.**
Customers cannot view the content of a Product / Meal
**Describe the solution you'd like**
Implement modals to see Product / Meal content and Add them to Cart
**Describe alternatives you've considered**
None
**Additional context**
None
| priority | ✨ view store product meal content add to cart is your feature request related to a problem please describe customers cannot view the content of a product meal describe the solution you d like implement modals to see product meal content and add them to cart describe alternatives you ve considered none additional context none | 1 |
225,966 | 7,496,775,916 | IssuesEvent | 2018-04-08 13:10:05 | phaazon/spectra | https://api.github.com/repos/phaazon/spectra | closed | Remove cycles from the Store | enhancement priority: high | A cycle breaker was implemented in `render::shader::module`. We need a simplified version based on a `HashSet` hidden in the `Store` to detect cycles and refuse resources that include cycles. | 1.0 | Remove cycles from the Store - A cycle breaker was implemented in `render::shader::module`. We need a simplified version based on a `HashSet` hidden in the `Store` to detect cycles and refuse resources that include cycles. | priority | remove cycles from the store a cycle breaker was implemented in render shader module we need a simplified version based on a hashset hidden in the store to detect cycles and refuse resources that include cycles | 1 |