| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 855 |
| labels | stringlengths | 4 to 721 |
| body | stringlengths | 1 to 261k |
| index | stringclasses | 13 values |
| text_combine | stringlengths | 96 to 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 240k |
| binary_label | int64 | 0 to 1 |
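For orientation, one record with the columns summarized above can be written as a plain Python dict. The values are abbreviated from the first sample row (the long text fields are stand-ins, not faithful copies), and the assumption that `text_combine` is built as `title + " - " + body` is inferred from the sample rows, not stated by the dataset:

```python
# Illustrative record matching the column summary above; long text
# fields are abbreviated stand-ins, not faithful copies of the data.
record = {
    "Unnamed: 0": 783692,
    "id": 27542285010.0,
    "type": "IssuesEvent",
    "created_at": "2023-03-07 09:23:13",
    "repo": "ballerina-platform/ballerina-lang",
    "repo_url": "https://api.github.com/repos/ballerina-platform/ballerina-lang",
    "action": "closed",
    "title": "Remove `createRecordFromMap()` by desugaring the logic internally",
    "labels": "Type/Task Priority/High Team/CompilerFE Area/Desugar Deferred",
    "body": "**Description:** ...",
    "index": "1.0",
    "label": "priority",
    "text": "remove createrecordfrommap by desugaring the logic internally ...",
    "binary_label": 1,
}

# In the sample rows, text_combine appears to be "<title> - <body>"
# (an inference from the data, not a documented rule).
record["text_combine"] = record["title"] + " - " + record["body"]

print(record["text_combine"].startswith(record["title"]))
```

This is only a sketch of the row shape; the real dataset has roughly 832k such rows.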
Unnamed: 0: 783,692
id: 27,542,285,010
type: IssuesEvent
created_at: 2023-03-07 09:23:13
repo: ballerina-platform/ballerina-lang
repo_url: https://api.github.com/repos/ballerina-platform/ballerina-lang
action: closed
title: Remove `createRecordFromMap()` by desugaring the logic internally
labels: Type/Task Priority/High Team/CompilerFE Area/Desugar Deferred
body:
**Description:** With https://github.com/ballerina-platform/ballerina-lang/pull/30525/ we introduced `createRecordFromMap()` `langlib.internal` function to create a record value from a map. Creating a map and calling `value:cloneWithType` would not work since `value:cloneWithType` only supports anydata https://raw.githubusercontent.com/ballerina-platform/ballerina-spec/master/lang/lib/value.bal **Steps to reproduce:** **Affected Versions:** **OS, DB, other environment details and versions:** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
index: 1.0
text_combine:
Remove `createRecordFromMap()` by desugaring the logic internally - **Description:** With https://github.com/ballerina-platform/ballerina-lang/pull/30525/ we introduced `createRecordFromMap()` `langlib.internal` function to create a record value from a map. Creating a map and calling `value:cloneWithType` would not work since `value:cloneWithType` only supports anydata https://raw.githubusercontent.com/ballerina-platform/ballerina-spec/master/lang/lib/value.bal **Steps to reproduce:** **Affected Versions:** **OS, DB, other environment details and versions:** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
label: priority
text:
remove createrecordfrommap by desugaring the logic internally description with we introduced createrecordfrommap langlib internal function to create a record value from a map creating a map and calling value clonewithtype would not work since value clonewithtype only supports anydata steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
binary_label: 1
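In the record above, the `text` field looks like a lowercased, de-punctuated derivation of `text_combine` (URLs, markup, and numbers are gone; only alphabetic tokens remain). A minimal sketch of such a cleaning step, as an approximation of what the rows show rather than the dataset's actual pipeline:

```python
import re

def clean(text: str) -> str:
    """Rough reconstruction of the `text` field seen in the sample rows:
    drop URLs and HTML comments, lowercase, keep alphabetic tokens only.
    This approximates the rows; the real preprocessing is not shown."""
    text = re.sub(r"https?://\S+", " ", text)            # drop bare URLs
    text = re.sub(r"<!--.*?-->", " ", text, flags=re.S)  # drop HTML comments
    tokens = re.findall(r"[a-z]+", text.lower())          # alphabetic tokens only
    return " ".join(tokens)

print(clean("Update `delete` and `find` accordingly."))
# -> update delete and find accordingly
```

Note the approximation is not exact: for example, the cleaned rows keep some non-ASCII characters (the "❤️" in one sample survives), which this sketch would drop.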
Unnamed: 0: 792,888
id: 27,976,476,191
type: IssuesEvent
created_at: 2023-03-25 16:46:38
repo: AY2223S2-CS2103T-T14-3/tp
repo_url: https://api.github.com/repos/AY2223S2-CS2103T-T14-3/tp
action: opened
title: Update Developer Guide
labels: priority.High
body:
To update the developer guide: - Introduction - Include GUI information - Add in feature implementation for `upcoming` and `copy`. Update `delete` and `find` accordingly. - Add in use cases - Add in test cases for manual testing
index: 1.0
text_combine:
Update Developer Guide - To update the developer guide: - Introduction - Include GUI information - Add in feature implementation for `upcoming` and `copy`. Update `delete` and `find` accordingly. - Add in use cases - Add in test cases for manual testing
label: priority
text:
update developer guide to update the developer guide introduction include gui information add in feature implementation for upcoming and copy update delete and find accordingly add in use cases add in test cases for manual testing
binary_label: 1
Unnamed: 0: 711,742
id: 24,473,541,551
type: IssuesEvent
created_at: 2022-10-07 23:43:51
repo: zulip/zulip
repo_url: https://api.github.com/repos/zulip/zulip
action: closed
title: Improve app title bar for stream and topic views
labels: help wanted area: misc in progress priority: high
body:
We should make the following changes in the app title bar: - [ ] For stream views, display the stream name as `#stream`, rather than `stream`. - [ ] For topic views, display the topic as `#stream > topic`, not just `topic`. This will make the title bar more clear and consistent with other parts of the Zulip app. [CZO discussion](https://chat.zulip.org/#narrow/stream/101-design/topic/app.20title.20bar/near/1434948) Current UI (note that one can't tell a stream view from a topic view): ![Screen Shot 2022-09-15 at 11 47 16 AM](https://user-images.githubusercontent.com/2090066/190485162-3703acb8-4496-44bb-a922-9661cde827fd.png) ![Screen Shot 2022-09-15 at 11 47 06 AM](https://user-images.githubusercontent.com/2090066/190485164-b8766d13-4837-454b-84b1-4b0c028f019a.png)
index: 1.0
text_combine:
Improve app title bar for stream and topic views - We should make the following changes in the app title bar: - [ ] For stream views, display the stream name as `#stream`, rather than `stream`. - [ ] For topic views, display the topic as `#stream > topic`, not just `topic`. This will make the title bar more clear and consistent with other parts of the Zulip app. [CZO discussion](https://chat.zulip.org/#narrow/stream/101-design/topic/app.20title.20bar/near/1434948) Current UI (note that one can't tell a stream view from a topic view): ![Screen Shot 2022-09-15 at 11 47 16 AM](https://user-images.githubusercontent.com/2090066/190485162-3703acb8-4496-44bb-a922-9661cde827fd.png) ![Screen Shot 2022-09-15 at 11 47 06 AM](https://user-images.githubusercontent.com/2090066/190485164-b8766d13-4837-454b-84b1-4b0c028f019a.png)
label: priority
text:
improve app title bar for stream and topic views we should make the following changes in the app title bar for stream views display the stream name as stream rather than stream for topic views display the topic as stream topic not just topic this will make the title bar more clear and consistent with other parts of the zulip app current ui note that one can t tell a stream view from a topic view
binary_label: 1
Unnamed: 0: 239,973
id: 7,800,247,239
type: IssuesEvent
created_at: 2018-06-09 06:59:06
repo: tine20/Tine-2.0-Open-Source-Groupware-and-CRM
repo_url: https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
action: closed
title: 0008136: create accounts in update script
labels: Bug HumanResources Mantis high priority
body:
**Reported by astintzing on 4 Apr 2013 10:35** create accounts in update script
index: 1.0
text_combine:
0008136: create accounts in update script - **Reported by astintzing on 4 Apr 2013 10:35** create accounts in update script
label: priority
text:
create accounts in update script reported by astintzing on apr create accounts in update script
binary_label: 1
Unnamed: 0: 495,558
id: 14,283,969,220
type: IssuesEvent
created_at: 2020-11-23 11:48:44
repo: webcompat/web-bugs
repo_url: https://api.github.com/repos/webcompat/web-bugs
action: closed
title: yandex.ru - see bug description
labels: browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
body:
<!-- @browser: Firefox Mobile 85.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/62336 --> <!-- @extra_labels: browser-fenix --> **URL**: https://yandex.ru/ **Browser / Version**: Firefox Mobile 85.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: not opening **Steps to Reproduce**: <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/092128ae-2e63-49f1-a12b-c503b29e95ce.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201122093438</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/11/ce7500c9-d73a-46e3-bb05-ef19a3cd2f0c) _From [webcompat.com](https://webcompat.com/) with ❤️_
index: 1.0
text_combine:
yandex.ru - see bug description - <!-- @browser: Firefox Mobile 85.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/62336 --> <!-- @extra_labels: browser-fenix --> **URL**: https://yandex.ru/ **Browser / Version**: Firefox Mobile 85.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: not opening **Steps to Reproduce**: <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/092128ae-2e63-49f1-a12b-c503b29e95ce.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201122093438</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/11/ce7500c9-d73a-46e3-bb05-ef19a3cd2f0c) _From [webcompat.com](https://webcompat.com/) with ❤️_
label: priority
text:
yandex ru see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description not opening steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
binary_label: 1
Unnamed: 0: 152,562
id: 5,849,169,235
type: IssuesEvent
created_at: 2017-05-10 22:54:43
repo: Polymer/polymer-bundler
repo_url: https://api.github.com/repos/Polymer/polymer-bundler
action: closed
title: Lazy Imports not getting resolved correctly in bundles
labels: Priority: High Status: Review Needed Type: Bug
body:
**Steps to Reproduce:** 1. Clone https://github.com/jay8t6/lazy-import-bundle 2. Run `npm i;bower i` 3. Run `npm run build` 4. Run `polymer serve build/ -o` 5. Open console **Expected Result:** `paper-button` will lazily import into the side bar **Actual Result:** `paper-button` doesn't import. You can see in the console that the import failed and resolved to `bower_components/bower_components/paper-button/paper-button.html` @usergenic
index: 1.0
text_combine:
Lazy Imports not getting resolved correctly in bundles - **Steps to Reproduce:** 1. Clone https://github.com/jay8t6/lazy-import-bundle 2. Run `npm i;bower i` 3. Run `npm run build` 4. Run `polymer serve build/ -o` 5. Open console **Expected Result:** `paper-button` will lazily import into the side bar **Actual Result:** `paper-button` doesn't import. You can see in the console that the import failed and resolved to `bower_components/bower_components/paper-button/paper-button.html` @usergenic
label: priority
text:
lazy imports not getting resolved correctly in bundles steps to reproduce clone run npm i bower i run npm run build run polymer serve build o open console expected result paper button will lazily import into the side bar actual result paper button doesn t import you can see in the console that the import failed and resolved to bower components bower components paper button paper button html usergenic
binary_label: 1
Unnamed: 0: 726,490
id: 25,000,784,634
type: IssuesEvent
created_at: 2022-11-03 07:40:39
repo: submariner-io/enhancements
repo_url: https://api.github.com/repos/submariner-io/enhancements
action: closed
title: Epic: Support OCP with submariner on VMWare
labels: enhancement priority:high
body:
**What would you like to be added**: Proposal is to fully support connecting multiple OCP clusters on VMWare using Submariner. **Why is this needed**: Aim of this proposal is to list and track steps needed to fill any gaps and allow full support for such deployments. **UPDATE** Investigations have confirmed that there is nothing that needs to be done to prepare cloud for VM Ware, in subctl or OCM. Expectation is that VMWare should work without any changes, except when using NSX which is for now outside scope of this EPIC. **Work Items** Based on results of investigations, only effort required is testing. - [ ] Add VMWare to Submariner Testday - [ ] Run QE on VMWare
index: 1.0
text_combine:
Epic: Support OCP with submariner on VMWare - **What would you like to be added**: Proposal is to fully support connecting multiple OCP clusters on VMWare using Submariner. **Why is this needed**: Aim of this proposal is to list and track steps needed to fill any gaps and allow full support for such deployments. **UPDATE** Investigations have confirmed that there is nothing that needs to be done to prepare cloud for VM Ware, in subctl or OCM. Expectation is that VMWare should work without any changes, except when using NSX which is for now outside scope of this EPIC. **Work Items** Based on results of investigations, only effort required is testing. - [ ] Add VMWare to Submariner Testday - [ ] Run QE on VMWare
label: priority
text:
epic support ocp with submariner on vmware what would you like to be added proposal is to fully support connecting multiple ocp clusters on vmware using submariner why is this needed aim of this proposal is to list and track steps needed to fill any gaps and allow full support for such deployments update investigations have confirmed that there is nothing that needs to be done to prepare cloud for vm ware in subctl or ocm expectation is that vmware should work without any changes except when using nsx which is for now outside scope of this epic work items based on results of investigations only effort required is testing add vmware to submariner testday run qe on vmware
binary_label: 1
Unnamed: 0: 310,588
id: 9,516,196,635
type: IssuesEvent
created_at: 2019-04-26 08:14:12
repo: status-im/status-react
repo_url: https://api.github.com/repos/status-im/status-react
action: opened
title: Delete options "Fetch 48-60h", "Fetch 84-96h" from release
labels: high-priority release
body:
With https://github.com/status-im/status-react/pull/8025 accidentally to `release` branch were included options "Fetch 48-60h", "Fetch 84-96h". Please remove it and merge this PR to `develop` and to `release` branches
index: 1.0
text_combine:
Delete options "Fetch 48-60h", "Fetch 84-96h" from release - With https://github.com/status-im/status-react/pull/8025 accidentally to `release` branch were included options "Fetch 48-60h", "Fetch 84-96h". Please remove it and merge this PR to `develop` and to `release` branches
label: priority
text:
delete options fetch fetch from release with accidentally to release branch were included options fetch fetch please remove it and merge this pr to develop and to release branches
binary_label: 1
Unnamed: 0: 484,335
id: 13,938,172,759
type: IssuesEvent
created_at: 2020-10-22 14:57:57
repo: enso-org/ide
repo_url: https://api.github.com/repos/enso-org/ide
action: opened
title: It's not possible to read file by selecting methods in searcher
labels: Priority: High Type: Bug
body:
<!-- Please ensure that you are using the latest version of Enso IDE before reporting the bug! It may have been fixed since. --> ### General Summary <!-- - Please include a high-level description of your bug here. --> In specification the proper way of reading file from the data folder in the project directory is: ``` file1 = Enso_Project.data / "sample.json" content1 = file1.read ``` or if user want to use absolute path: ``` file2 = File.new "/foo/bar/baz/sample.json" content2 = file2.read ``` Unfortunately it is not possible to read file by using searcher and selecting `read` method in either of those cases ### Steps to Reproduce <!-- Please list the reproduction steps for your bug. --> Open IDE add node ` File.new "/foo/bar/baz/sample.json"` with proper filepath to existing file open searcher with first node selected ### Expected Result <!-- - A description of the results you expected from the reproduction steps. --> hint for reading file available ### Actual Result <!-- - A description of what actually happens when you perform these steps. - Please include any error output if relevant. --> hints for text or empty searcher ### Enso Version <!-- - Please include the version of Enso IDE you are using here. --> core : 2.0.0-alpha.0 build : 51e09ff electron : 8.1.1 chrome : 80.0.3987.141
index: 1.0
text_combine:
It's not possible to read file by selecting methods in searcher - <!-- Please ensure that you are using the latest version of Enso IDE before reporting the bug! It may have been fixed since. --> ### General Summary <!-- - Please include a high-level description of your bug here. --> In specification the proper way of reading file from the data folder in the project directory is: ``` file1 = Enso_Project.data / "sample.json" content1 = file1.read ``` or if user want to use absolute path: ``` file2 = File.new "/foo/bar/baz/sample.json" content2 = file2.read ``` Unfortunately it is not possible to read file by using searcher and selecting `read` method in either of those cases ### Steps to Reproduce <!-- Please list the reproduction steps for your bug. --> Open IDE add node ` File.new "/foo/bar/baz/sample.json"` with proper filepath to existing file open searcher with first node selected ### Expected Result <!-- - A description of the results you expected from the reproduction steps. --> hint for reading file available ### Actual Result <!-- - A description of what actually happens when you perform these steps. - Please include any error output if relevant. --> hints for text or empty searcher ### Enso Version <!-- - Please include the version of Enso IDE you are using here. --> core : 2.0.0-alpha.0 build : 51e09ff electron : 8.1.1 chrome : 80.0.3987.141
label: priority
text:
it s not possible to read file by selecting methods in searcher please ensure that you are using the latest version of enso ide before reporting the bug it may have been fixed since general summary please include a high level description of your bug here in specification the proper way of reading file from the data folder in the project directory is enso project data sample json read or if user want to use absolute path file new foo bar baz sample json read unfortunately it is not possible to read file by using searcher and selecting read method in either of those cases steps to reproduce please list the reproduction steps for your bug open ide add node file new foo bar baz sample json with proper filepath to existing file open searcher with first node selected expected result a description of the results you expected from the reproduction steps hint for reading file available actual result a description of what actually happens when you perform these steps please include any error output if relevant hints for text or empty searcher enso version please include the version of enso ide you are using here core alpha build electron chrome
binary_label: 1
Unnamed: 0: 236,712
id: 7,752,129,914
type: IssuesEvent
created_at: 2018-05-30 19:16:46
repo: StrangeLoopGames/EcoIssues
repo_url: https://api.github.com/repos/StrangeLoopGames/EcoIssues
action: closed
title: USER ISSUE: World creation fails
labels: High Priority
body:
**Version:** 0.7.3.3 beta **Steps to Reproduce:** 1. Start the game on Linux using Steam 2. Create a new World and tick "Generate New" 3. Click "Start" Sad that the game still does not work, I bought the game half a year ago, and it still does not work and has many bugs. **Expected behavior:** Generate a world I guess? **Actual behavior:** Error: Could not start server: Object reference not set to an instance of an object
index: 1.0
text_combine:
USER ISSUE: World creation fails - **Version:** 0.7.3.3 beta **Steps to Reproduce:** 1. Start the game on Linux using Steam 2. Create a new World and tick "Generate New" 3. Click "Start" Sad that the game still does not work, I bought the game half a year ago, and it still does not work and has many bugs. **Expected behavior:** Generate a world I guess? **Actual behavior:** Error: Could not start server: Object reference not set to an instance of an object
label: priority
text:
user issue world creation fails version beta steps to reproduce start the game on linux using steam create a new world and tick generate new click start sad that the game still does not work i bought the game half a year ago and it still does not work and has many bugs expected behavior generate a world i guess actual behavior error could not start server object reference not set to an instance of an object
binary_label: 1
Unnamed: 0: 428,291
id: 12,406,027,490
type: IssuesEvent
created_at: 2020-05-21 18:21:58
repo: ClinGen/clincoded
repo_url: https://api.github.com/repos/ClinGen/clincoded
action: closed
title: Add ClinVar Submitter IDs to VCEP affiliations
labels: VCI affiliation backend feature priority: high
body:
For all VCEP affiliations we need to find out and store their ClinVar submitter IDs in their affiliation files.
index: 1.0
text_combine:
Add ClinVar Submitter IDs to VCEP affiliations - For all VCEP affiliations we need to find out and store their ClinVar submitter IDs in their affiliation files.
label: priority
text:
add clinvar submitter ids to vcep affiliations for all vcep affiliations we need to find out and store their clinvar submitter ids in their affiliation files
binary_label: 1
Unnamed: 0: 439,280
id: 12,680,613,246
type: IssuesEvent
created_at: 2020-06-19 14:00:38
repo: mlr-org/mlr3hyperband
repo_url: https://api.github.com/repos/mlr-org/mlr3hyperband
action: closed
title: Implement private .assign_result function
labels: Priority: High
body:
The optimizer class offers an automatic way to assign the result. It just chooses the evaluation that gave the best result. However, for hyperband you might not want to return a result that is based on a lower fidelity.
index: 1.0
text_combine:
Implement private .assign_result function - The optimizer class offers an automatic way to assign the result. It just chooses the evaluation that gave the best result. However, for hyperband you might not want to return a result that is based on a lower fidelity.
label: priority
text:
implement private assign result function the optimizer class offers an automatic way to assign the result it just chooses the evaluation that gave the best result however for hyperband you might not want to return a result that is based on a lower fidelity
binary_label: 1
Unnamed: 0: 269,533
id: 8,439,909,552
type: IssuesEvent
created_at: 2018-10-18 04:34:38
repo: CS2103-AY1819S1-F11-1/main
repo_url: https://api.github.com/repos/CS2103-AY1819S1-F11-1/main
action: closed
title: Correct mistakes in V1.1 review
labels: priority.High status.Ongoing type.Bug
body:
Do feel free to add on #82 (solves below): ----- ![image](https://user-images.githubusercontent.com/35895966/46745480-b61e1c00-ccdf-11e8-9a5a-0ff64ab4f7e7.png) ![image](https://user-images.githubusercontent.com/35895966/46745504-c0401a80-ccdf-11e8-88b4-22df0fa7a790.png) ![image](https://user-images.githubusercontent.com/35895966/46745543-d9e16200-ccdf-11e8-8a97-9aa001ba9b8d.png) Other things to correct (if not done yet): ------- ![image](https://user-images.githubusercontent.com/35895966/46745611-f9788a80-ccdf-11e8-9feb-64b404cbd89c.png) ![image](https://user-images.githubusercontent.com/35895966/46745621-00070200-cce0-11e8-9a6b-f41fc5bc31b9.png) ![image](https://user-images.githubusercontent.com/35895966/46745633-06957980-cce0-11e8-8106-85d97d3d31b6.png)
index: 1.0
text_combine:
Correct mistakes in V1.1 review - Do feel free to add on #82 (solves below): ----- ![image](https://user-images.githubusercontent.com/35895966/46745480-b61e1c00-ccdf-11e8-9a5a-0ff64ab4f7e7.png) ![image](https://user-images.githubusercontent.com/35895966/46745504-c0401a80-ccdf-11e8-88b4-22df0fa7a790.png) ![image](https://user-images.githubusercontent.com/35895966/46745543-d9e16200-ccdf-11e8-8a97-9aa001ba9b8d.png) Other things to correct (if not done yet): ------- ![image](https://user-images.githubusercontent.com/35895966/46745611-f9788a80-ccdf-11e8-9feb-64b404cbd89c.png) ![image](https://user-images.githubusercontent.com/35895966/46745621-00070200-cce0-11e8-9a6b-f41fc5bc31b9.png) ![image](https://user-images.githubusercontent.com/35895966/46745633-06957980-cce0-11e8-8106-85d97d3d31b6.png)
label: priority
text:
correct mistakes in review do feel free to add on solves below other things to correct if not done yet
binary_label: 1
Unnamed: 0: 275,504
id: 8,576,376,194
type: IssuesEvent
created_at: 2018-11-12 20:11:16
repo: jsheroes/jsheroes.io
repo_url: https://api.github.com/repos/jsheroes/jsheroes.io
action: closed
title: Code of Conduct redesign
labels: good first issue high-priority
body:
Redesigned the Code of Conduct page, replaced "Cluj JavaScripters" with "JSHeroes" in the text. Attached: - the mockups - the photo used - the PDF with the long version & corrected text: [code-of-conduct-jsheroes-long-version.pdf](https://github.com/jsheroes/jsheroes.io/files/2461837/code-of-conduct-jsheroes-long-version.pdf) Here is the text on the mockups: ### **The JSHeroes Code of Conduct** Below is a short version of our Code of Conduct. You can download the full length document [here](https://drive.google.com/file/d/0B9mBUlTNHZJNRnhGWE5LRERud00/edit). **The short version** **All attendees, speakers, sponsors and volunteers at our conference are required to agree with the following code of conduct.** As organisers of JSHeroes, the JSHeroes community will enforce this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody. **The JSHeroes community commits** to providing a safe and friendly facility for all JSHeroes events, respecting each and every individual in the community and responding promptly to all reports of misconduct with our full attention. **What we expect from each of you** is to treat everyone with respect, refrain from using offensive language and imagery, and to report any derogatory or offensive behaviour to a member of the JSHeroes community or JSHeroes staff directly, or through email at [welcome@jsheroes.io](mailto:welcome@jsheroes.io) We value your attendance and your participation in the JSHeroes community and expect everyone to accord to the community Code of Conduct at all JSHeroes venues and events. 
![code-of-conduct-desktop-1920px](https://user-images.githubusercontent.com/210512/46612387-0b5c0100-cb19-11e8-8d79-ef899937b833.png) ![code-of-conduct-mobile-360px](https://user-images.githubusercontent.com/210512/46612388-0b5c0100-cb19-11e8-936c-1e331f97d31e.png) ![jsheroes-code-of-conduct](https://user-images.githubusercontent.com/210512/46612891-71955380-cb1a-11e8-8ece-274bf33fd03d.jpg)
index: 1.0
text_combine:
Code of Conduct redesign - Redesigned the Code of Conduct page, replaced "Cluj JavaScripters" with "JSHeroes" in the text. Attached: - the mockups - the photo used - the PDF with the long version & corrected text: [code-of-conduct-jsheroes-long-version.pdf](https://github.com/jsheroes/jsheroes.io/files/2461837/code-of-conduct-jsheroes-long-version.pdf) Here is the text on the mockups: ### **The JSHeroes Code of Conduct** Below is a short version of our Code of Conduct. You can download the full length document [here](https://drive.google.com/file/d/0B9mBUlTNHZJNRnhGWE5LRERud00/edit). **The short version** **All attendees, speakers, sponsors and volunteers at our conference are required to agree with the following code of conduct.** As organisers of JSHeroes, the JSHeroes community will enforce this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody. **The JSHeroes community commits** to providing a safe and friendly facility for all JSHeroes events, respecting each and every individual in the community and responding promptly to all reports of misconduct with our full attention. **What we expect from each of you** is to treat everyone with respect, refrain from using offensive language and imagery, and to report any derogatory or offensive behaviour to a member of the JSHeroes community or JSHeroes staff directly, or through email at [welcome@jsheroes.io](mailto:welcome@jsheroes.io) We value your attendance and your participation in the JSHeroes community and expect everyone to accord to the community Code of Conduct at all JSHeroes venues and events. 
![code-of-conduct-desktop-1920px](https://user-images.githubusercontent.com/210512/46612387-0b5c0100-cb19-11e8-8d79-ef899937b833.png) ![code-of-conduct-mobile-360px](https://user-images.githubusercontent.com/210512/46612388-0b5c0100-cb19-11e8-936c-1e331f97d31e.png) ![jsheroes-code-of-conduct](https://user-images.githubusercontent.com/210512/46612891-71955380-cb1a-11e8-8ece-274bf33fd03d.jpg)
label: priority
text:
code of conduct redesign redesigned the code of conduct page replaced cluj javascripters with jsheroes in the text attached the mockups the photo used the pdf with the long version corrected text here is the text on the mockups the jsheroes code of conduct below is a short version of our code of conduct you can download the full length document the short version all attendees speakers sponsors and volunteers at our conference are required to agree with the following code of conduct as organisers of jsheroes the jsheroes community will enforce this code throughout the event we expect cooperation from all participants to help ensure a safe environment for everybody the jsheroes community commits to providing a safe and friendly facility for all jsheroes events respecting each and every individual in the community and responding promptly to all reports of misconduct with our full attention what we expect from each of you is to treat everyone with respect refrain from using offensive language and imagery and to report any derogatory or offensive behaviour to a member of the jsheroes community or jsheroes staff directly or through email at mailto welcome jsheroes io we value your attendance and your participation in the jsheroes community and expect everyone to accord to the community code of conduct at all jsheroes venues and events
binary_label: 1
Unnamed: 0: 101,857
id: 4,146,806,117
type: IssuesEvent
created_at: 2016-06-15 02:26:18
repo: benbaptist/minecraft-wrapper
repo_url: https://api.github.com/repos/benbaptist/minecraft-wrapper
action: closed
title: Don't understand why api.getStorage objects fluctuate
labels: high priority
body:
I am dealing with one object where I am storing warp data.. and it is constantly fluctuating.. items i deleted a day ago are back in the list.. and at any given moment, the list changes (I am the only one who can add/delete warps).. all the api.getstorages do this... why? Without any input from me, it changes from this: ''' {"warps": {"p2": [31883.323637433674, 67.0, 25338.20901793392], "portals": [3270.5307348580477, 103.0, 2669.668092340251, "staff", -90.60082244873047, -4.050037384033203, false], "p1": [31826.95201823558, 71.0, 25412.453195705904], "stronghold1": [741.2230532498089, 33.0, 0.6999999880790738], "OProom": [235.84977598355078, 56.0, -63.3245829276579], "Jeeskarox": [8117.1371682263925, 68.15457878522446, 63.444568633883726], "alex_chestroom": [3017.820089214752, 50.83829140374222, 8739.171687385], "myregion": [1521.9157604505936, 67.22955103477543, 301.3688044397513], "nether_pearltest": [115.53734140378518, 102.0, 83.46924722296535], "spawntest": [3271.685257982652, 103.0, 2673.5810949049383, "staff"], "swamp": [-464.845840532287, 84.31968973049136, -29.682383729993838], "wason4-1": [-13811.410204106456, 81.15052866660584, 9933.376722944515], "mystiques": [-5388.5, 72.0, -10011.5], "jail": [3263.5089288523445, 98.0, 2662.511479645375], "water_temple": [197.30000001192093, 44.0, -5605.519227954707], "hacker_hell": [7483.5, 0.8873451333870699, 7817.5], "zombieSpawner": [1614.5325514656352, 11.419156430429029, 580.6350509094153], "jacklare": [-1407.0923962382235, 70.7393572271209, 4541.348465524757]}}''' to this: ''' {"warps": {"OProom": [235.84977598355078, 56.0, -63.3245829276579], "alex_chestroom": [3017.820089214752, 50.83829140374222, 8739.171687385], "hacker_hell": [7483.5, 0.8873451333870699, 7817.5], "jacklare": [-1407.0923962382235, 70.7393572271209, 4541.348465524757]}}''' to this: ''' {"warps": {"alex_chestroom": [3017.820089214752, 50.83829140374222, 8739.171687385], "hacker_hell": [7483.5, 0.8873451333870699, 7817.5], "stronghold1": 
[740.1556378918563, 33.0, 1.5982858292301847, "staff", 186.7499542236328, 34.949989318847656, false], "jacklare": [-1396.0239155914612, 67.0, 4533.811520126322, "staff", 72.29999542236328, 25.149999618530273, false]}}'''
1.0
Don't understand why api.getStorage objects fluctuate - I am dealing with one object where I am storing warp data.. and it is constantly fluctuating.. items i deleted a day ago are back in the list.. and at any given moment, the list changes (I am the only one who can add/delete warps).. all the api.getstorages do this... why? Without any input from me, it changes from this: ''' {"warps": {"p2": [31883.323637433674, 67.0, 25338.20901793392], "portals": [3270.5307348580477, 103.0, 2669.668092340251, "staff", -90.60082244873047, -4.050037384033203, false], "p1": [31826.95201823558, 71.0, 25412.453195705904], "stronghold1": [741.2230532498089, 33.0, 0.6999999880790738], "OProom": [235.84977598355078, 56.0, -63.3245829276579], "Jeeskarox": [8117.1371682263925, 68.15457878522446, 63.444568633883726], "alex_chestroom": [3017.820089214752, 50.83829140374222, 8739.171687385], "myregion": [1521.9157604505936, 67.22955103477543, 301.3688044397513], "nether_pearltest": [115.53734140378518, 102.0, 83.46924722296535], "spawntest": [3271.685257982652, 103.0, 2673.5810949049383, "staff"], "swamp": [-464.845840532287, 84.31968973049136, -29.682383729993838], "wason4-1": [-13811.410204106456, 81.15052866660584, 9933.376722944515], "mystiques": [-5388.5, 72.0, -10011.5], "jail": [3263.5089288523445, 98.0, 2662.511479645375], "water_temple": [197.30000001192093, 44.0, -5605.519227954707], "hacker_hell": [7483.5, 0.8873451333870699, 7817.5], "zombieSpawner": [1614.5325514656352, 11.419156430429029, 580.6350509094153], "jacklare": [-1407.0923962382235, 70.7393572271209, 4541.348465524757]}}''' to this: ''' {"warps": {"OProom": [235.84977598355078, 56.0, -63.3245829276579], "alex_chestroom": [3017.820089214752, 50.83829140374222, 8739.171687385], "hacker_hell": [7483.5, 0.8873451333870699, 7817.5], "jacklare": [-1407.0923962382235, 70.7393572271209, 4541.348465524757]}}''' to this: ''' {"warps": {"alex_chestroom": [3017.820089214752, 50.83829140374222, 8739.171687385], "hacker_hell": 
[7483.5, 0.8873451333870699, 7817.5], "stronghold1": [740.1556378918563, 33.0, 1.5982858292301847, "staff", 186.7499542236328, 34.949989318847656, false], "jacklare": [-1396.0239155914612, 67.0, 4533.811520126322, "staff", 72.29999542236328, 25.149999618530273, false]}}'''
priority
don t understand why api getstorage objects fluctuate i am dealing with one object where i am storing warp data and it is constantly fluctuating items i deleted a day ago are back in the list and at any given moment the list changes i am the only one who can add delete warps all the api getstorages do this why without any input from me it changes from this warps portals oproom jeeskarox alex chestroom myregion nether pearltest spawntest swamp mystiques jail water temple hacker hell zombiespawner jacklare to this warps oproom alex chestroom hacker hell jacklare to this warps alex chestroom hacker hell jacklare
1
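The fluctuation described in the record above can be made concrete by diffing the storage snapshots: the reporter's complaint is that warp keys deleted earlier reappear and newly added keys vanish between reads. A minimal Python sketch of that comparison (the JSON below is abbreviated from the issue, with coordinates replaced by placeholder numbers; warp names are taken from the report):

```python
import json

# Two abbreviated snapshots of the "warps" storage object from the report
# (coordinate values are placeholders, not the real positions).
before = json.loads('{"warps": {"p2": [1, 2, 3], "OProom": [4, 5, 6], "jacklare": [7, 8, 9]}}')
after = json.loads('{"warps": {"OProom": [4, 5, 6], "jacklare": [7, 8, 9], "stronghold1": [1, 1, 1]}}')

# Warp keys present in the old snapshot but missing from the new one, and vice versa.
removed = sorted(set(before["warps"]) - set(after["warps"]))
added = sorted(set(after["warps"]) - set(before["warps"]))
print(removed)  # ['p2']
print(added)    # ['stronghold1']
```

A diff like this run on each read makes it easy to log exactly which keys churn between reads, which is the observable symptom the reporter describes.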
285,886
8,780,576,899
IssuesEvent
2018-12-19 17:43:22
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
opened
[8.0.0 Steam Beta] Missing Icons
High Priority
Missing following icons: AdvancedBakingFocusedWorkflowTalentGroup, AdvancedBakingFrugalWorkspaceTalentGroup, AdvancedBakingLavishWorkspaceTalentGroup, AdvancedBakingParrallelProcessingTalentGroup, AdvancedCampfireCookingFocusedWorkflowTalentGroup, AdvancedCampfireCookingFrugalWorkspaceTalentGroup, AdvancedCampfireCookingLavishWorkspaceTalentGroup, AdvancedCampfireCookingParrallelProcessingTalentGroup, AdvancedCookingFocusedWorkflowTalentGroup, AdvancedCookingFrugalWorkspaceTalentGroup, AdvancedCookingLavishWorkspaceTalentGroup, AdvancedCookingParrallelProcessingTalentGroup, AdvancedSmeltingFocusedWorkflowTalentGroup, AdvancedSmeltingFrugalWorkspaceTalentGroup, AdvancedSmeltingLavishWorkspaceTalentGroup, AdvancedSmeltingParrallelProcessingTalentGroup, BakingFocusedWorkflowTalentGroup, BakingFrugalWorkspaceTalentGroup, BakingLavishWorkspaceTalentGroup, BakingParrallelProcessingTalentGroup, BasicEngineeringFocusedWorkflowTalentGroup, BasicEngineeringFrugalWorkspaceTalentGroup, BasicEngineeringLavishWorkspaceTalentGroup, BasicEngineeringParrallelProcessingTalentGroup, BricklayingFocusedWorkflowTalentGroup, BricklayingFrugalWorkspaceTalentGroup, BricklayingLavishWorkspaceTalentGroup, BricklayingParrallelProcessingTalentGroup, ButcheryFocusedWorkflowTalentGroup, ButcheryFrugalWorkspaceTalentGroup, ButcheryLavishWorkspaceTalentGroup, ButcheryParrallelProcessingTalentGroup, CampfireFocusedWorkflowTalentGroup, CampfireFrugalWorkspaceTalentGroup, CampfireLavishWorkspaceTalentGroup, CampfireParrallelProcessingTalentGroup, CementFocusedWorkflowTalentGroup, CementFrugalWorkspaceTalentGroup, CementLavishWorkspaceTalentGroup, CementParrallelProcessingTalentGroup, CookingFocusedWorkflowTalentGroup, CookingFrugalWorkspaceTalentGroup, CookingLavishWorkspaceTalentGroup, CookingParrallelProcessingTalentGroup, CuttingEdgeCookingFocusedWorkflowTalentGroup, CuttingEdgeCookingFrugalWorkspaceTalentGroup, CuttingEdgeCookingLavishWorkspaceTalentGroup, 
CuttingEdgeCookingParrallelProcessingTalentGroup, ElectronicsFocusedWorkflowTalentGroup, ElectronicsFrugalWorkspaceTalentGroup, ElectronicsLavishWorkspaceTalentGroup, ElectronicsParrallelProcessingTalentGroup, FertilizersFocusedWorkflowTalentGroup, FertilizersFrugalWorkspaceTalentGroup, FertilizersLavishWorkspaceTalentGroup, FertilizersParrallelProcessingTalentGroup, GatheringExperiencedFarmhandTalentGroup, GatheringNaturalGathererTalentGroup, GatheringToolEfficiencyTalentGroup, GatheringToolStrengthTalentGroup, GlassworkingFocusedWorkflowTalentGroup, GlassworkingFrugalWorkspaceTalentGroup, GlassworkingLavishWorkspaceTalentGroup, GlassworkingParrallelProcessingTalentGroup, HewingFocusedWorkflowTalentGroup, HewingFrugalWorkspaceTalentGroup, HewingLavishWorkspaceTalentGroup, HewingParrallelProcessingTalentGroup, HuntingAggressiveAnglerTalentGroup, HuntingArrowRecoveryTalentGroup, HuntingDeadeyeTalentGroup, HuntingFishermanTalentGroup, IndustryFocusedWorkflowTalentGroup, IndustryFrugalWorkspaceTalentGroup, IndustryLavishWorkspaceTalentGroup, IndustryParrallelProcessingTalentGroup, LoggingCleanupCrewTalentGroup, LoggingLoggersLuckTalentGroup, LoggingToolEfficiencyTalentGroup, LoggingToolStrengthTalentGroup, LumberFocusedWorkflowTalentGroup, LumberFrugalWorkspaceTalentGroup, LumberLavishWorkspaceTalentGroup, LumberParrallelProcessingTalentGroup, MechanicsFocusedWorkflowTalentGroup, MechanicsFrugalWorkspaceTalentGroup, MechanicsLavishWorkspaceTalentGroup, MechanicsParrallelProcessingTalentGroup, MetalConstructionFocusedWorkflowTalentGroup, MetalConstructionFrugalWorkspaceTalentGroup, MetalConstructionLavishWorkspaceTalentGroup, MetalConstructionParrallelProcessingTalentGroup, MillingFocusedWorkflowTalentGroup, MillingFrugalWorkspaceTalentGroup, MillingLavishWorkspaceTalentGroup, MillingParrallelProcessingTalentGroup, MiningLuckyBreakTalentGroup, MiningSweepingHandsTalentGroup, MiningToolEfficiencyTalentGroup, MiningToolStrengthTalentGroup, 
MortaringFocusedWorkflowTalentGroup, MortaringFrugalWorkspaceTalentGroup, MortaringLavishWorkspaceTalentGroup, MortaringParrallelProcessingTalentGroup, OilDrillingFocusedWorkflowTalentGroup, OilDrillingFrugalWorkspaceTalentGroup, OilDrillingLavishWorkspaceTalentGroup, OilDrillingParrallelProcessingTalentGroup, PaperMillingFocusedWorkflowTalentGroup, PaperMillingFrugalWorkspaceTalentGroup, PaperMillingLavishWorkspaceTalentGroup, PaperMillingParrallelProcessingTalentGroup, SelfImprovementDiverTalentGroup, SelfImprovementNatureAdventurerTalentGroup, SelfImprovementSprinterTalentGroup, SelfImprovementUrbanTravellerTalentGroup, SmeltingFocusedWorkflowTalentGroup, SmeltingFrugalWorkspaceTalentGroup, SmeltingLavishWorkspaceTalentGroup, SmeltingParrallelProcessingTalentGroup, TailoringFocusedWorkflowTalentGroup, TailoringFrugalWorkspaceTalentGroup, TailoringLavishWorkspaceTalentGroup, TailoringParrallelProcessingTalentGroup
1.0
[8.0.0 Steam Beta] Missing Icons - Missing following icons: AdvancedBakingFocusedWorkflowTalentGroup, AdvancedBakingFrugalWorkspaceTalentGroup, AdvancedBakingLavishWorkspaceTalentGroup, AdvancedBakingParrallelProcessingTalentGroup, AdvancedCampfireCookingFocusedWorkflowTalentGroup, AdvancedCampfireCookingFrugalWorkspaceTalentGroup, AdvancedCampfireCookingLavishWorkspaceTalentGroup, AdvancedCampfireCookingParrallelProcessingTalentGroup, AdvancedCookingFocusedWorkflowTalentGroup, AdvancedCookingFrugalWorkspaceTalentGroup, AdvancedCookingLavishWorkspaceTalentGroup, AdvancedCookingParrallelProcessingTalentGroup, AdvancedSmeltingFocusedWorkflowTalentGroup, AdvancedSmeltingFrugalWorkspaceTalentGroup, AdvancedSmeltingLavishWorkspaceTalentGroup, AdvancedSmeltingParrallelProcessingTalentGroup, BakingFocusedWorkflowTalentGroup, BakingFrugalWorkspaceTalentGroup, BakingLavishWorkspaceTalentGroup, BakingParrallelProcessingTalentGroup, BasicEngineeringFocusedWorkflowTalentGroup, BasicEngineeringFrugalWorkspaceTalentGroup, BasicEngineeringLavishWorkspaceTalentGroup, BasicEngineeringParrallelProcessingTalentGroup, BricklayingFocusedWorkflowTalentGroup, BricklayingFrugalWorkspaceTalentGroup, BricklayingLavishWorkspaceTalentGroup, BricklayingParrallelProcessingTalentGroup, ButcheryFocusedWorkflowTalentGroup, ButcheryFrugalWorkspaceTalentGroup, ButcheryLavishWorkspaceTalentGroup, ButcheryParrallelProcessingTalentGroup, CampfireFocusedWorkflowTalentGroup, CampfireFrugalWorkspaceTalentGroup, CampfireLavishWorkspaceTalentGroup, CampfireParrallelProcessingTalentGroup, CementFocusedWorkflowTalentGroup, CementFrugalWorkspaceTalentGroup, CementLavishWorkspaceTalentGroup, CementParrallelProcessingTalentGroup, CookingFocusedWorkflowTalentGroup, CookingFrugalWorkspaceTalentGroup, CookingLavishWorkspaceTalentGroup, CookingParrallelProcessingTalentGroup, CuttingEdgeCookingFocusedWorkflowTalentGroup, CuttingEdgeCookingFrugalWorkspaceTalentGroup, CuttingEdgeCookingLavishWorkspaceTalentGroup, 
CuttingEdgeCookingParrallelProcessingTalentGroup, ElectronicsFocusedWorkflowTalentGroup, ElectronicsFrugalWorkspaceTalentGroup, ElectronicsLavishWorkspaceTalentGroup, ElectronicsParrallelProcessingTalentGroup, FertilizersFocusedWorkflowTalentGroup, FertilizersFrugalWorkspaceTalentGroup, FertilizersLavishWorkspaceTalentGroup, FertilizersParrallelProcessingTalentGroup, GatheringExperiencedFarmhandTalentGroup, GatheringNaturalGathererTalentGroup, GatheringToolEfficiencyTalentGroup, GatheringToolStrengthTalentGroup, GlassworkingFocusedWorkflowTalentGroup, GlassworkingFrugalWorkspaceTalentGroup, GlassworkingLavishWorkspaceTalentGroup, GlassworkingParrallelProcessingTalentGroup, HewingFocusedWorkflowTalentGroup, HewingFrugalWorkspaceTalentGroup, HewingLavishWorkspaceTalentGroup, HewingParrallelProcessingTalentGroup, HuntingAggressiveAnglerTalentGroup, HuntingArrowRecoveryTalentGroup, HuntingDeadeyeTalentGroup, HuntingFishermanTalentGroup, IndustryFocusedWorkflowTalentGroup, IndustryFrugalWorkspaceTalentGroup, IndustryLavishWorkspaceTalentGroup, IndustryParrallelProcessingTalentGroup, LoggingCleanupCrewTalentGroup, LoggingLoggersLuckTalentGroup, LoggingToolEfficiencyTalentGroup, LoggingToolStrengthTalentGroup, LumberFocusedWorkflowTalentGroup, LumberFrugalWorkspaceTalentGroup, LumberLavishWorkspaceTalentGroup, LumberParrallelProcessingTalentGroup, MechanicsFocusedWorkflowTalentGroup, MechanicsFrugalWorkspaceTalentGroup, MechanicsLavishWorkspaceTalentGroup, MechanicsParrallelProcessingTalentGroup, MetalConstructionFocusedWorkflowTalentGroup, MetalConstructionFrugalWorkspaceTalentGroup, MetalConstructionLavishWorkspaceTalentGroup, MetalConstructionParrallelProcessingTalentGroup, MillingFocusedWorkflowTalentGroup, MillingFrugalWorkspaceTalentGroup, MillingLavishWorkspaceTalentGroup, MillingParrallelProcessingTalentGroup, MiningLuckyBreakTalentGroup, MiningSweepingHandsTalentGroup, MiningToolEfficiencyTalentGroup, MiningToolStrengthTalentGroup, 
MortaringFocusedWorkflowTalentGroup, MortaringFrugalWorkspaceTalentGroup, MortaringLavishWorkspaceTalentGroup, MortaringParrallelProcessingTalentGroup, OilDrillingFocusedWorkflowTalentGroup, OilDrillingFrugalWorkspaceTalentGroup, OilDrillingLavishWorkspaceTalentGroup, OilDrillingParrallelProcessingTalentGroup, PaperMillingFocusedWorkflowTalentGroup, PaperMillingFrugalWorkspaceTalentGroup, PaperMillingLavishWorkspaceTalentGroup, PaperMillingParrallelProcessingTalentGroup, SelfImprovementDiverTalentGroup, SelfImprovementNatureAdventurerTalentGroup, SelfImprovementSprinterTalentGroup, SelfImprovementUrbanTravellerTalentGroup, SmeltingFocusedWorkflowTalentGroup, SmeltingFrugalWorkspaceTalentGroup, SmeltingLavishWorkspaceTalentGroup, SmeltingParrallelProcessingTalentGroup, TailoringFocusedWorkflowTalentGroup, TailoringFrugalWorkspaceTalentGroup, TailoringLavishWorkspaceTalentGroup, TailoringParrallelProcessingTalentGroup
priority
missing icons missing following icons advancedbakingfocusedworkflowtalentgroup advancedbakingfrugalworkspacetalentgroup advancedbakinglavishworkspacetalentgroup advancedbakingparrallelprocessingtalentgroup advancedcampfirecookingfocusedworkflowtalentgroup advancedcampfirecookingfrugalworkspacetalentgroup advancedcampfirecookinglavishworkspacetalentgroup advancedcampfirecookingparrallelprocessingtalentgroup advancedcookingfocusedworkflowtalentgroup advancedcookingfrugalworkspacetalentgroup advancedcookinglavishworkspacetalentgroup advancedcookingparrallelprocessingtalentgroup advancedsmeltingfocusedworkflowtalentgroup advancedsmeltingfrugalworkspacetalentgroup advancedsmeltinglavishworkspacetalentgroup advancedsmeltingparrallelprocessingtalentgroup bakingfocusedworkflowtalentgroup bakingfrugalworkspacetalentgroup bakinglavishworkspacetalentgroup bakingparrallelprocessingtalentgroup basicengineeringfocusedworkflowtalentgroup basicengineeringfrugalworkspacetalentgroup basicengineeringlavishworkspacetalentgroup basicengineeringparrallelprocessingtalentgroup bricklayingfocusedworkflowtalentgroup bricklayingfrugalworkspacetalentgroup bricklayinglavishworkspacetalentgroup bricklayingparrallelprocessingtalentgroup butcheryfocusedworkflowtalentgroup butcheryfrugalworkspacetalentgroup butcherylavishworkspacetalentgroup butcheryparrallelprocessingtalentgroup campfirefocusedworkflowtalentgroup campfirefrugalworkspacetalentgroup campfirelavishworkspacetalentgroup campfireparrallelprocessingtalentgroup cementfocusedworkflowtalentgroup cementfrugalworkspacetalentgroup cementlavishworkspacetalentgroup cementparrallelprocessingtalentgroup cookingfocusedworkflowtalentgroup cookingfrugalworkspacetalentgroup cookinglavishworkspacetalentgroup cookingparrallelprocessingtalentgroup cuttingedgecookingfocusedworkflowtalentgroup cuttingedgecookingfrugalworkspacetalentgroup cuttingedgecookinglavishworkspacetalentgroup cuttingedgecookingparrallelprocessingtalentgroup 
electronicsfocusedworkflowtalentgroup electronicsfrugalworkspacetalentgroup electronicslavishworkspacetalentgroup electronicsparrallelprocessingtalentgroup fertilizersfocusedworkflowtalentgroup fertilizersfrugalworkspacetalentgroup fertilizerslavishworkspacetalentgroup fertilizersparrallelprocessingtalentgroup gatheringexperiencedfarmhandtalentgroup gatheringnaturalgatherertalentgroup gatheringtoolefficiencytalentgroup gatheringtoolstrengthtalentgroup glassworkingfocusedworkflowtalentgroup glassworkingfrugalworkspacetalentgroup glassworkinglavishworkspacetalentgroup glassworkingparrallelprocessingtalentgroup hewingfocusedworkflowtalentgroup hewingfrugalworkspacetalentgroup hewinglavishworkspacetalentgroup hewingparrallelprocessingtalentgroup huntingaggressiveanglertalentgroup huntingarrowrecoverytalentgroup huntingdeadeyetalentgroup huntingfishermantalentgroup industryfocusedworkflowtalentgroup industryfrugalworkspacetalentgroup industrylavishworkspacetalentgroup industryparrallelprocessingtalentgroup loggingcleanupcrewtalentgroup loggingloggerslucktalentgroup loggingtoolefficiencytalentgroup loggingtoolstrengthtalentgroup lumberfocusedworkflowtalentgroup lumberfrugalworkspacetalentgroup lumberlavishworkspacetalentgroup lumberparrallelprocessingtalentgroup mechanicsfocusedworkflowtalentgroup mechanicsfrugalworkspacetalentgroup mechanicslavishworkspacetalentgroup mechanicsparrallelprocessingtalentgroup metalconstructionfocusedworkflowtalentgroup metalconstructionfrugalworkspacetalentgroup metalconstructionlavishworkspacetalentgroup metalconstructionparrallelprocessingtalentgroup millingfocusedworkflowtalentgroup millingfrugalworkspacetalentgroup millinglavishworkspacetalentgroup millingparrallelprocessingtalentgroup miningluckybreaktalentgroup miningsweepinghandstalentgroup miningtoolefficiencytalentgroup miningtoolstrengthtalentgroup mortaringfocusedworkflowtalentgroup mortaringfrugalworkspacetalentgroup mortaringlavishworkspacetalentgroup 
mortaringparrallelprocessingtalentgroup oildrillingfocusedworkflowtalentgroup oildrillingfrugalworkspacetalentgroup oildrillinglavishworkspacetalentgroup oildrillingparrallelprocessingtalentgroup papermillingfocusedworkflowtalentgroup papermillingfrugalworkspacetalentgroup papermillinglavishworkspacetalentgroup papermillingparrallelprocessingtalentgroup selfimprovementdivertalentgroup selfimprovementnatureadventurertalentgroup selfimprovementsprintertalentgroup selfimprovementurbantravellertalentgroup smeltingfocusedworkflowtalentgroup smeltingfrugalworkspacetalentgroup smeltinglavishworkspacetalentgroup smeltingparrallelprocessingtalentgroup tailoringfocusedworkflowtalentgroup tailoringfrugalworkspacetalentgroup tailoringlavishworkspacetalentgroup tailoringparrallelprocessingtalentgroup
1
486,985
14,017,446,544
IssuesEvent
2020-10-29 15:39:54
GQCG/GQCP
https://api.github.com/repos/GQCG/GQCP
opened
Enable the evaluation of `RSQOperators` in a full spin-resolved ONV basis
C++ complexity: intermediate priority: high refactor
In recent refactors (#688), we had to temporarily disable some of the CI functionality. This issue tracks the re-enabling of the API to evaluate restricted operators in a full spin-resolved ONV basis.
1.0
Enable the evaluation of `RSQOperators` in a full spin-resolved ONV basis - In recent refactors (#688), we had to temporarily disable some of the CI functionality. This issue tracks the re-enabling of the API to evaluate restricted operators in a full spin-resolved ONV basis.
priority
enable the evaluation of rsqoperators in a full spin resolved onv basis in recent refactors we had to temporarily disable some of the ci functionality this issue tracks the re enabling of the api to evaluate restricted operators in a full spin resolved onv basis
1
483,339
13,923,065,141
IssuesEvent
2020-10-21 14:00:37
tomav/docker-mailserver
https://api.github.com/repos/tomav/docker-mailserver
closed
dovecot horked on {7.1.0,latest} versions.
bug postfix / dovecot related priority 1 [HIGH] script related
Horked is a technical term for broken :) Seriously though -- Dovecot configuration works appropriately up to version `7.0.1` and before. Upgrading to the `latest` or `release-v7.1.0` image results in dovecot not starting on docker-mailserver, and consequently rejecting IMAP connections. Using the same configuration on images up to 7.0.1 works appropriately. **Expectations**: A working configuration from 7.0.1 should work in 7.0.1+, unless explicit changes are called out in the [Announcements section](https://github.com/tomav/docker-mailserver#announcements). ## Context Pretty standard postfix/dovecot setup using imap/ssl w/ letsencrypt, dmarc, dkim, etc. docker-compose.yml (redacted and paths simplified): ```yaml version: "3" networks: mail: driver: bridge ipam: config: - subnet: {MAIL NET}/24 db_db: external: true services: mail: image: tvial/docker-mailserver:release-v7.0.1 restart: "always" stop_grace_period: "1m" networks: mail: ipv4_address: {MAIL IP} ports: - "25:25/tcp" - "587:587/tcp" - "993:993/tcp" hostname: "mail" domainname: "{REDACTED}" container_name: "mail" environment: - "DEFAULT_RELAY_HOST=''" - "DMS_DEBUG=1" - "DOVECOT_MAILBOX_FORMAT=maildir" - "ENABLE_CLAMAV=0" - "ENABLE_ELK_FORWARDER=0" - "ENABLE_FAIL2BAN=0" - "ENABLE_FETCHMAIL=0" - "ENABLE_LDAP=''" - "ENABLE_MANAGESIEVE=1" - "ENABLE_POP3=''" - "ENABLE_POSTFIX_VIRTUAL_TRANSPORT=''" - "ENABLE_POSTGREY=1" - "ENABLE_QUOTAS=0" - "ENABLE_SASLAUTHD=0" - "ENABLE_SPAMASSASSIN=1" - "ENABLE_SRS=1" - "LOGROTATE_INTERVAL=weekly" - "LOGWATCH_INTERVAL=weekly" - "ONE_DIR=1" - "PERMIT_DOCKER=host" - "PFLOGSUMM_TRIGGER=logrotate" - "POSTFIX_DAGENT=''" - "POSTFIX_INET_PROTOCOLS=ipv4" - "POSTFIX_MAILBOX_SIZE_LIMIT=0" - "POSTFIX_MESSAGE_SIZE_LIMIT=10480000" - "POSTGREY_AUTO_WHITELIST_CLIENTS=0" - "POSTGREY_DELAY=300" - "POSTGREY_MAX_AGE=35" - "POSTGREY_TEXT=Delayed by postgrey" - "POSTMASTER_ADDRESS=postmaster@{REDACTED}" - "POSTSCREEN_ACTION=enforce" - "RELAY_HOST=''" - "SA_KILL=6.31" - 
"SA_SPAM_SUBJECT=***SPAM***" - "SA_TAG2=6.31" - "SA_TAG=3.0" - "SASL_PASSWD=''" - "SASLAUTHD_MECH_OPTIONS=''" - "SASLAUTHD_MECHANISMS=''" - "SMTP_ONLY=''" - "SPOOF_PROTECTION=1" - "SRS_EXCLUDE_DOMAINS=''" - "SRS_SENDER_CLASSES=envelope_sender,header_sender" - "SSL_TYPE=letsencrypt" - "TLS_LEVEL=modern" - "TZ=America/Los_Angeles" - "VIRUSMAILS_DELETE_DELAY=7" volumes: - "/d/mail:/var/mail" - "/d/config:/tmp/docker-mailserver" - "/d/90-sieve.conf:/etc/dovecot/conf.d/90-sieve.conf" - "/d/letsencrypt:/etc/letsencrypt:ro" - "/var/log/docker/mail:/var/log/mail" - "/etc/localtime:/etc/localtime:ro" ``` IMAP/SSL works fine, can login without issue (and all versions before this). ```bash $ openssl s_client -starttls imap -connect mail.{REDACTED}:993 CONNECTED(00000005) ``` Dovecot is running on the server: ```bash # ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 1 23:05 ? 00:00:00 /usr/bin/python2 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf root 8 1 0 23:05 ? 00:00:00 /bin/bash /usr/local/bin/start-mailserver.sh root 457 0 0 23:05 pts/0 00:00:00 /bin/sh root 525 1 0 23:05 ? 00:00:00 /usr/sbin/cron -f root 527 1 0 23:05 ? 00:00:00 /usr/sbin/rsyslogd -n root 533 1 0 23:05 ? 00:00:00 /usr/sbin/dovecot -F -c /etc/dovecot/dovecot.conf dovecot 536 533 0 23:05 ? 00:00:00 dovecot/anvil root 537 533 0 23:05 ? 00:00:00 dovecot/log root 538 533 0 23:05 ? 00:00:00 dovecot/config opendkim 540 1 0 23:05 ? 00:00:00 /usr/sbin/opendkim -f opendkim 542 540 0 23:05 ? 00:00:00 /usr/sbin/opendkim -f opendma+ 548 1 0 23:05 ? 00:00:00 /usr/sbin/opendmarc -f -p inet:8893@localhost -P /var/run/opendmarc/opendmarc.pid postgrey 556 1 1 23:05 ? 00:00:00 postgrey --inet=127.0.0.1:10023 --syslog-facility=mail --delay=300 --max-age=35 --auto-whitelist-clients=0 --g root 558 1 0 23:05 ? 00:00:00 bash /usr/local/bin/postfix-wrapper.sh amavis 567 1 16 23:05 ? 00:00:01 /usr/sbin/amavisd-new (master) root 569 8 0 23:05 ? 00:00:00 tail -fn 0 /var/log/mail/mail.log postsrsd 661 1 0 23:05 ? 
00:00:00 /usr/sbin/postsrsd -f 10001 -r 10002 -d {REDACTED} -s /etc/postsrsd.secret -a = -n 4 -N 4 -u postsrsd -p /var/r root 1206 1 0 23:05 ? 00:00:00 /usr/lib/postfix/sbin/master postfix 1208 1206 0 23:05 ? 00:00:00 pickup -l -t fifo -u -c -o content_filter= -o receive_override_options=no_header_body_checks postfix 1209 1206 0 23:05 ? 00:00:00 qmgr -l -t unix -u amavis 1210 567 0 23:05 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) amavis 1211 567 0 23:05 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) root 1214 558 0 23:05 ? 00:00:00 sleep 5 root 1215 457 0 23:05 pts/0 00:00:00 ps -ef ``` ### *What* is affected by this bug? Upgrading beyond `7.0.1` without any changes causes dovecot **not** to run, and therefore, IMAP/SSL to fail. docker-compose.yml (same as above, just version bump): ```yaml ... services: mail: image: tvial/docker-mailserver:release-v7.1.0 ... ``` IMAP/SSL fails. ```bash $ openssl s_client -starttls imap -connect mail.{REDACTED}:993 140671805964736:error:0200206F:system library:connect:Connection refused:../crypto/bio/b_sock2.c:110: 140671805964736:error:2008A067:BIO routines:BIO_connect:connect error:../crypto/bio/b_sock2.c:111: connect:errno=111 ``` Dovecot is **not** running on the server: ```bash # ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 2 23:03 ? 00:00:00 /usr/bin/python2 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf root 8 1 0 23:03 ? 00:00:00 /bin/bash /usr/local/bin/start-mailserver.sh root 552 1 0 23:03 ? 00:00:00 /usr/sbin/cron -f root 554 1 0 23:03 ? 00:00:00 /usr/sbin/rsyslogd -n opendkim 558 1 0 23:03 ? 00:00:00 /usr/sbin/opendkim -f opendkim 560 558 0 23:03 ? 00:00:00 /usr/sbin/opendkim -f opendma+ 566 1 0 23:03 ? 00:00:00 /usr/sbin/opendmarc -f -p inet:8893@localhost -P /var/run/opendmarc/opendmarc.pid postgrey 574 1 3 23:03 ? 00:00:00 postgrey --inet=127.0.0.1:10023 --syslog-facility=mail --delay=300 --max-age=35 --auto-whitelist-clients=0 --g root 576 1 0 23:03 ? 
00:00:00 bash /usr/local/bin/postfix-wrapper.sh amavis 585 1 44 23:03 ? 00:00:01 /usr/sbin/amavisd-new (master) root 587 8 0 23:03 ? 00:00:00 tail -fn 0 /var/log/mail/mail.log root 596 0 1 23:03 pts/0 00:00:00 /bin/sh postsrsd 626 1 0 23:03 ? 00:00:00 /usr/sbin/postsrsd -f 10001 -r 10002 -d {REDACTED} -s /etc/postsrsd.secret -a = -n 4 -N 4 -u postsrsd -p /var/r amavis 1141 585 0 23:04 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) amavis 1142 585 0 23:04 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) root 1225 1 0 23:04 ? 00:00:00 /usr/lib/postfix/sbin/master root 1226 576 0 23:04 ? 00:00:00 sleep 5 postfix 1227 1225 0 23:04 ? 00:00:00 pickup -l -t fifo -u -c -o content_filter= -o receive_override_options=no_header_body_checks postfix 1228 1225 0 23:04 ? 00:00:00 qmgr -l -t unix -u root 1229 596 0 23:04 pts/0 00:00:00 ps -ef ``` roundcube (web UI using IMAP/SSL) is not happy either: /var/log/syslog: ```bash Oct 12 20:18:47 {SERVER} roundcube[7483]: errors: <eb34b0a6> IMAP Error: Login failed for {USER} against {REDACTED} from {GATEWAY IP}(X-Real-IP: {CLIENT IP},X-Forwarded-For: {CLIENT IP}). Could not connect to ssl://{REDACTED}:993: Unknown reason in /var/www/html/program/lib/Roundcube/rcube_imap.php on line 200 (POST /?_task=login&_action=login) ``` ### *When* does this occur? Any version beyond `7.0.1` ## *How* do we replicate the issue? See above for replication. I can provide additional files for postfix conf and dovecot sieve, but that doesn't seem to be the issue. ## Actual Behavior IMAP/SSL logins fail. Dovecot not running on the server. ## Expected behavior (i.e. solution) **Expectations**: A working configuration from 7.0.1 should work in 7.0.1+, unless explicit changes are called out in the [Announcements section](https://github.com/tomav/docker-mailserver#announcements). 
## Your Environment * Amount of RAM available: 128GB * Mailserver version used: * working: release-v7.0.0, release-v7.0.1 * non-working: release-v7.1.0, latest * Docker version used: Docker version 19.03.13, build 4484c46d9d * Environment settings relevant to the config: See config. * Any relevant stack traces ("Full trace" preferred): <!--- Please remember to format code using triple backticks (`) so that it is neatly formatted when the issue is posted. Spoilers are recommended for readability: <details> <summary>Click me to expand </summary> ```sh echo "hello world" ``` </details> -->
1.0
dovecot horked on {7.1.0,latest} versions. - Horked is a technical term for broken :) Seriously though -- Dovecot configuration works appropriately up to version `7.0.1` and before. Upgrading to the `latest` or `release-v7.1.0` image results in dovecot not starting on docker-mailserver, and consequently rejecting IMAP connections. Using the same configuration on images up to 7.0.1 works appropriately. **Expectations**: A working configuration from 7.0.1 should work in 7.0.1+, unless explicit changes are called out in the [Announcements section](https://github.com/tomav/docker-mailserver#announcements). ## Context Pretty standard postfix/dovecot setup using imap/ssl w/ letsencrypt, dmarc, dkim, etc. docker-compose.yml (redacted and paths simplified): ```yaml version: "3" networks: mail: driver: bridge ipam: config: - subnet: {MAIL NET}/24 db_db: external: true services: mail: image: tvial/docker-mailserver:release-v7.0.1 restart: "always" stop_grace_period: "1m" networks: mail: ipv4_address: {MAIL IP} ports: - "25:25/tcp" - "587:587/tcp" - "993:993/tcp" hostname: "mail" domainname: "{REDACTED}" container_name: "mail" environment: - "DEFAULT_RELAY_HOST=''" - "DMS_DEBUG=1" - "DOVECOT_MAILBOX_FORMAT=maildir" - "ENABLE_CLAMAV=0" - "ENABLE_ELK_FORWARDER=0" - "ENABLE_FAIL2BAN=0" - "ENABLE_FETCHMAIL=0" - "ENABLE_LDAP=''" - "ENABLE_MANAGESIEVE=1" - "ENABLE_POP3=''" - "ENABLE_POSTFIX_VIRTUAL_TRANSPORT=''" - "ENABLE_POSTGREY=1" - "ENABLE_QUOTAS=0" - "ENABLE_SASLAUTHD=0" - "ENABLE_SPAMASSASSIN=1" - "ENABLE_SRS=1" - "LOGROTATE_INTERVAL=weekly" - "LOGWATCH_INTERVAL=weekly" - "ONE_DIR=1" - "PERMIT_DOCKER=host" - "PFLOGSUMM_TRIGGER=logrotate" - "POSTFIX_DAGENT=''" - "POSTFIX_INET_PROTOCOLS=ipv4" - "POSTFIX_MAILBOX_SIZE_LIMIT=0" - "POSTFIX_MESSAGE_SIZE_LIMIT=10480000" - "POSTGREY_AUTO_WHITELIST_CLIENTS=0" - "POSTGREY_DELAY=300" - "POSTGREY_MAX_AGE=35" - "POSTGREY_TEXT=Delayed by postgrey" - "POSTMASTER_ADDRESS=postmaster@{REDACTED}" - "POSTSCREEN_ACTION=enforce" - "RELAY_HOST=''" 
- "SA_KILL=6.31" - "SA_SPAM_SUBJECT=***SPAM***" - "SA_TAG2=6.31" - "SA_TAG=3.0" - "SASL_PASSWD=''" - "SASLAUTHD_MECH_OPTIONS=''" - "SASLAUTHD_MECHANISMS=''" - "SMTP_ONLY=''" - "SPOOF_PROTECTION=1" - "SRS_EXCLUDE_DOMAINS=''" - "SRS_SENDER_CLASSES=envelope_sender,header_sender" - "SSL_TYPE=letsencrypt" - "TLS_LEVEL=modern" - "TZ=America/Los_Angeles" - "VIRUSMAILS_DELETE_DELAY=7" volumes: - "/d/mail:/var/mail" - "/d/config:/tmp/docker-mailserver" - "/d/90-sieve.conf:/etc/dovecot/conf.d/90-sieve.conf" - "/d/letsencrypt:/etc/letsencrypt:ro" - "/var/log/docker/mail:/var/log/mail" - "/etc/localtime:/etc/localtime:ro" ``` IMAP/SSL works fine, can login without issue (and all versions before this). ```bash $ openssl s_client -starttls imap -connect mail.{REDACTED}:993 CONNECTED(00000005) ``` Dovecot is running on the server: ```bash # ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 1 23:05 ? 00:00:00 /usr/bin/python2 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf root 8 1 0 23:05 ? 00:00:00 /bin/bash /usr/local/bin/start-mailserver.sh root 457 0 0 23:05 pts/0 00:00:00 /bin/sh root 525 1 0 23:05 ? 00:00:00 /usr/sbin/cron -f root 527 1 0 23:05 ? 00:00:00 /usr/sbin/rsyslogd -n root 533 1 0 23:05 ? 00:00:00 /usr/sbin/dovecot -F -c /etc/dovecot/dovecot.conf dovecot 536 533 0 23:05 ? 00:00:00 dovecot/anvil root 537 533 0 23:05 ? 00:00:00 dovecot/log root 538 533 0 23:05 ? 00:00:00 dovecot/config opendkim 540 1 0 23:05 ? 00:00:00 /usr/sbin/opendkim -f opendkim 542 540 0 23:05 ? 00:00:00 /usr/sbin/opendkim -f opendma+ 548 1 0 23:05 ? 00:00:00 /usr/sbin/opendmarc -f -p inet:8893@localhost -P /var/run/opendmarc/opendmarc.pid postgrey 556 1 1 23:05 ? 00:00:00 postgrey --inet=127.0.0.1:10023 --syslog-facility=mail --delay=300 --max-age=35 --auto-whitelist-clients=0 --g root 558 1 0 23:05 ? 00:00:00 bash /usr/local/bin/postfix-wrapper.sh amavis 567 1 16 23:05 ? 00:00:01 /usr/sbin/amavisd-new (master) root 569 8 0 23:05 ? 
00:00:00 tail -fn 0 /var/log/mail/mail.log postsrsd 661 1 0 23:05 ? 00:00:00 /usr/sbin/postsrsd -f 10001 -r 10002 -d {REDACTED} -s /etc/postsrsd.secret -a = -n 4 -N 4 -u postsrsd -p /var/r root 1206 1 0 23:05 ? 00:00:00 /usr/lib/postfix/sbin/master postfix 1208 1206 0 23:05 ? 00:00:00 pickup -l -t fifo -u -c -o content_filter= -o receive_override_options=no_header_body_checks postfix 1209 1206 0 23:05 ? 00:00:00 qmgr -l -t unix -u amavis 1210 567 0 23:05 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) amavis 1211 567 0 23:05 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) root 1214 558 0 23:05 ? 00:00:00 sleep 5 root 1215 457 0 23:05 pts/0 00:00:00 ps -ef ``` ### *What* is affected by this bug? Upgrading beyond `7.0.1` without any changes causes dovecot **not** to run, and therefore, IMAP/SSL to fail. docker-compose.yml (same as above, just version bump): ```yaml ... services: mail: image: tvial/docker-mailserver:release-v7.1.0 ... ``` IMAP/SSL fails. ```bash $ openssl s_client -starttls imap -connect mail.{REDACTED}:993 140671805964736:error:0200206F:system library:connect:Connection refused:../crypto/bio/b_sock2.c:110: 140671805964736:error:2008A067:BIO routines:BIO_connect:connect error:../crypto/bio/b_sock2.c:111: connect:errno=111 ``` Dovecot is **not** running on the server: ```bash # ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 2 23:03 ? 00:00:00 /usr/bin/python2 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf root 8 1 0 23:03 ? 00:00:00 /bin/bash /usr/local/bin/start-mailserver.sh root 552 1 0 23:03 ? 00:00:00 /usr/sbin/cron -f root 554 1 0 23:03 ? 00:00:00 /usr/sbin/rsyslogd -n opendkim 558 1 0 23:03 ? 00:00:00 /usr/sbin/opendkim -f opendkim 560 558 0 23:03 ? 00:00:00 /usr/sbin/opendkim -f opendma+ 566 1 0 23:03 ? 00:00:00 /usr/sbin/opendmarc -f -p inet:8893@localhost -P /var/run/opendmarc/opendmarc.pid postgrey 574 1 3 23:03 ? 
00:00:00 postgrey --inet=127.0.0.1:10023 --syslog-facility=mail --delay=300 --max-age=35 --auto-whitelist-clients=0 --g root 576 1 0 23:03 ? 00:00:00 bash /usr/local/bin/postfix-wrapper.sh amavis 585 1 44 23:03 ? 00:00:01 /usr/sbin/amavisd-new (master) root 587 8 0 23:03 ? 00:00:00 tail -fn 0 /var/log/mail/mail.log root 596 0 1 23:03 pts/0 00:00:00 /bin/sh postsrsd 626 1 0 23:03 ? 00:00:00 /usr/sbin/postsrsd -f 10001 -r 10002 -d {REDACTED} -s /etc/postsrsd.secret -a = -n 4 -N 4 -u postsrsd -p /var/r amavis 1141 585 0 23:04 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) amavis 1142 585 0 23:04 ? 00:00:00 /usr/sbin/amavisd-new (virgin child) root 1225 1 0 23:04 ? 00:00:00 /usr/lib/postfix/sbin/master root 1226 576 0 23:04 ? 00:00:00 sleep 5 postfix 1227 1225 0 23:04 ? 00:00:00 pickup -l -t fifo -u -c -o content_filter= -o receive_override_options=no_header_body_checks postfix 1228 1225 0 23:04 ? 00:00:00 qmgr -l -t unix -u root 1229 596 0 23:04 pts/0 00:00:00 ps -ef ``` roundcube (web UI using IMAP/SSL) is not happy either: /var/log/syslog: ```bash Oct 12 20:18:47 {SERVER} roundcube[7483]: errors: <eb34b0a6> IMAP Error: Login failed for {USER} against {REDACTED} from {GATEWAY IP}(X-Real-IP: {CLIENT IP},X-Forwarded-For: {CLIENT IP}). Could not connect to ssl://{REDACTED}:993: Unknown reason in /var/www/html/program/lib/Roundcube/rcube_imap.php on line 200 (POST /?_task=login&_action=login) ``` ### *When* does this occur? Any version beyond `7.0.1` ## *How* do we replicate the issue? See above for replication. I can provide additional files for postfix conf and dovecot sieve, but that doesn't seem to be the issue. ## Actual Behavior IMAP/SSL logins fail. Dovecot not running on the server. ## Expected behavior (i.e. solution) **Expectations**: A working configuration from 7.0.1 should work in 7.0.1+, unless explicit changes are called out in the [Announcements section](https://github.com/tomav/docker-mailserver#announcements). 
## Your Environment * Amount of RAM available: 128GB * Mailserver version used: * working: release-v7.0.0, release-v7.0.1 * non-working: release-v7.1.0, latest * Docker version used: Docker version 19.03.13, build 4484c46d9d * Environment settings relevant to the config: See config. * Any relevant stack traces ("Full trace" preferred): <!--- Please remember to format code using triple backticks (`) so that it is neatly formatted when the issue is posted. Spoilers are recommended for readability: <details> <summary>Click me to expand </summary> ```sh echo "hello world" ``` </details> -->
priority
dovecot horked on latest versions horked is a technical termed for broken seriously though dovecot configuration works appropriately up to version and before upgrading to the latest or release image results in dovecot not starting on docker mailserver and consequently rejecting imap connections using the same configuration on images up to works appropriately expectations a working configuration from should work in unless explicit changes are called out in the context pretty standard postfix dovecot setup using imap ssl w letsencrypt dmarc dkim etc docker compose yml redacted and paths simplified yaml version networks mail driver bridge ipam config subnet mail net db db external true services mail image tvial docker mailserver release restart always stop grace period networks mail address mail ip ports tcp tcp tcp hostname mail domainname redacted container name mail environment default relay host dms debug dovecot mailbox format maildir enable clamav enable elk forwarder enable enable fetchmail enable ldap enable managesieve enable enable postfix virtual transport enable postgrey enable quotas enable saslauthd enable spamassassin enable srs logrotate interval weekly logwatch interval weekly one dir permit docker host pflogsumm trigger logrotate postfix dagent postfix inet protocols postfix mailbox size limit postfix message size limit postgrey auto whitelist clients postgrey delay postgrey max age postgrey text delayed by postgrey postmaster address postmaster redacted postscreen action enforce relay host sa kill sa spam subject spam sa sa tag sasl passwd saslauthd mech options saslauthd mechanisms smtp only spoof protection srs exclude domains srs sender classes envelope sender header sender ssl type letsencrypt tls level modern tz america los angeles virusmails delete delay volumes d mail var mail d config tmp docker mailserver d sieve conf etc dovecot conf d sieve conf d letsencrypt etc letsencrypt ro var log docker mail var log mail etc localtime etc localtime 
ro imap ssl works fine can login without issue and all versions before this bash openssl s client starttls imap connect mail redacted connected dovecot is running on the server bash ps ef uid pid ppid c stime tty time cmd root usr bin usr bin supervisord c etc supervisor supervisord conf root bin bash usr local bin start mailserver sh root pts bin sh root usr sbin cron f root usr sbin rsyslogd n root usr sbin dovecot f c etc dovecot dovecot conf dovecot dovecot anvil root dovecot log root dovecot config opendkim usr sbin opendkim f opendkim usr sbin opendkim f opendma usr sbin opendmarc f p inet localhost p var run opendmarc opendmarc pid postgrey postgrey inet syslog facility mail delay max age auto whitelist clients g root bash usr local bin postfix wrapper sh amavis usr sbin amavisd new master root tail fn var log mail mail log postsrsd usr sbin postsrsd f r d redacted s etc postsrsd secret a n n u postsrsd p var r root usr lib postfix sbin master postfix pickup l t fifo u c o content filter o receive override options no header body checks postfix qmgr l t unix u amavis usr sbin amavisd new virgin child amavis usr sbin amavisd new virgin child root sleep root pts ps ef what is affected by this bug upgrading beyond without any changes causes dovecot not to run and therefore imap ssl to fail docker compose yml same as above just version bump yaml services mail image tvial docker mailserver release imap ssl fails bash openssl s client starttls imap connect mail redacted error system library connect connection refused crypto bio b c error bio routines bio connect connect error crypto bio b c connect errno dovecot is not running on the server bash ps ef uid pid ppid c stime tty time cmd root usr bin usr bin supervisord c etc supervisor supervisord conf root bin bash usr local bin start mailserver sh root usr sbin cron f root usr sbin rsyslogd n opendkim usr sbin opendkim f opendkim usr sbin opendkim f opendma usr sbin opendmarc f p inet localhost p var run opendmarc 
opendmarc pid postgrey postgrey inet syslog facility mail delay max age auto whitelist clients g root bash usr local bin postfix wrapper sh amavis usr sbin amavisd new master root tail fn var log mail mail log root pts bin sh postsrsd usr sbin postsrsd f r d redacted s etc postsrsd secret a n n u postsrsd p var r amavis usr sbin amavisd new virgin child amavis usr sbin amavisd new virgin child root usr lib postfix sbin master root sleep postfix pickup l t fifo u c o content filter o receive override options no header body checks postfix qmgr l t unix u root pts ps ef roundcube web ui using imap ssl is not happy either var log syslog bash oct server roundcube errors imap error login failed for user against redacted from gateway ip x real ip client ip x forwarded for client ip could not connect to ssl redacted unknown reason in var www html program lib roundcube rcube imap php on line post task login action login when does this occur any version beyond how do we replicate the issue see above for replication i can provide additional files for postfix conf and dovecot sieve but that doesn t seem to be the issue actual behavior imap ssl logins fail dovecot not running on the server expected behavior i e solution expectations a working configuration from should work in unless explicit changes are called out in the your environment amount of ram available mailserver version used working release release non working release latest docker version used docker version build environment settings relevant to the config see config any relevant stack traces full trace preferred please remember to format code using triple backticks so that it is neatly formatted when the issue is posted spoilers are recommended for readability click me to expand sh echo hello world
1
202,049
7,043,566,956
IssuesEvent
2017-12-31 08:42:25
commons-app/apps-android-commons
https://api.github.com/repos/commons-app/apps-android-commons
opened
[Urgent] Crashes in v2.6.5
bug high priority
We are getting a lot of crashes in v2.6.5 production, with a crash rate of 4.27%, more than in previous versions. This has unfortunately led to a lot of uninstalls. :( The majority of the crashes seem to be related to Dagger. I have pasted them below: 9 reports, 6 users: ``` java.lang.RuntimeException: at android.app.ActivityThread.handleCreateService (ActivityThread.java:3349) at android.app.ActivityThread.-wrap4 (Unknown Source) at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1677) at android.os.Handler.dispatchMessage (Handler.java:106) at android.os.Looper.loop (Looper.java:164) at android.app.ActivityThread.main (ActivityThread.java:6494) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:438) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:807) Caused by: java.lang.RuntimeException: at dagger.android.AndroidInjection.inject (AndroidInjection.java:134) at dagger.android.DaggerService.onCreate (DaggerService.java:27) at fr.free.nrw.commons.HandlerService.onCreate (HandlerService.java:55) at fr.free.nrw.commons.upload.UploadService.onCreate (UploadService.java:122) at android.app.ActivityThread.handleCreateService (ActivityThread.java:3339) ``` 8 reports, 4 users: ``` java.lang.RuntimeException: at android.app.ActivityThread.handleCreateService (ActivityThread.java:3544) at android.app.ActivityThread.-wrap6 (ActivityThread.java) at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1732) at android.os.Handler.dispatchMessage (Handler.java:102) at android.os.Looper.loop (Looper.java:154) at android.app.ActivityThread.main (ActivityThread.java:6776) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:1496) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1386) Caused by: java.lang.RuntimeException: at dagger.android.AndroidInjection.inject 
(AndroidInjection.java:134) at dagger.android.DaggerService.onCreate (DaggerService.java:27) at fr.free.nrw.commons.HandlerService.onCreate (HandlerService.java:55) at fr.free.nrw.commons.upload.UploadService.onCreate (UploadService.java:122) at android.app.ActivityThread.handleCreateService (ActivityThread.java:3534) ``` 5 reports, 2 users: ``` java.lang.RuntimeException: at fr.free.nrw.commons.contributions.ContributionsActivity$1.onServiceDisconnected (ContributionsActivity.java:97) at android.app.LoadedApk$ServiceDispatcher.doConnected (LoadedApk.java:1457) at android.app.LoadedApk$ServiceDispatcher$RunConnection.run (LoadedApk.java:1489) at android.os.Handler.handleCallback (Handler.java:754) at android.os.Handler.dispatchMessage (Handler.java:95) at android.os.Looper.loop (Looper.java:163) at android.app.ActivityThread.main (ActivityThread.java:6361) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:904) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:794) ``` 8 reports, 3 users: ``` java.lang.RuntimeException: at android.app.ActivityThread.handleCreateService (ActivityThread.java:3193) at android.app.ActivityThread.-wrap5 (ActivityThread.java) at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1563) at android.os.Handler.dispatchMessage (Handler.java:102) at android.os.Looper.loop (Looper.java:154) at android.app.ActivityThread.main (ActivityThread.java:6123) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:867) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:757) Caused by: java.lang.RuntimeException: at dagger.android.AndroidInjection.inject (AndroidInjection.java:134) at dagger.android.DaggerService.onCreate (DaggerService.java:27) at fr.free.nrw.commons.HandlerService.onCreate (HandlerService.java:55) at fr.free.nrw.commons.upload.UploadService.onCreate 
(UploadService.java:122) at android.app.ActivityThread.handleCreateService (ActivityThread.java:3183) ```
1.0
[Urgent] Crashes in v2.6.5 - We are getting a lot of crashes in v2.6.5 production, with a crash rate of 4.27%, more than in previous versions. This has unfortunately led to a lot of uninstalls. :( The majority of the crashes seem to be related to Dagger. I have pasted them below: 9 reports, 6 users: ``` java.lang.RuntimeException: at android.app.ActivityThread.handleCreateService (ActivityThread.java:3349) at android.app.ActivityThread.-wrap4 (Unknown Source) at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1677) at android.os.Handler.dispatchMessage (Handler.java:106) at android.os.Looper.loop (Looper.java:164) at android.app.ActivityThread.main (ActivityThread.java:6494) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:438) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:807) Caused by: java.lang.RuntimeException: at dagger.android.AndroidInjection.inject (AndroidInjection.java:134) at dagger.android.DaggerService.onCreate (DaggerService.java:27) at fr.free.nrw.commons.HandlerService.onCreate (HandlerService.java:55) at fr.free.nrw.commons.upload.UploadService.onCreate (UploadService.java:122) at android.app.ActivityThread.handleCreateService (ActivityThread.java:3339) ``` 8 reports, 4 users: ``` java.lang.RuntimeException: at android.app.ActivityThread.handleCreateService (ActivityThread.java:3544) at android.app.ActivityThread.-wrap6 (ActivityThread.java) at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1732) at android.os.Handler.dispatchMessage (Handler.java:102) at android.os.Looper.loop (Looper.java:154) at android.app.ActivityThread.main (ActivityThread.java:6776) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:1496) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1386) Caused by: java.lang.RuntimeException: at 
dagger.android.AndroidInjection.inject (AndroidInjection.java:134) at dagger.android.DaggerService.onCreate (DaggerService.java:27) at fr.free.nrw.commons.HandlerService.onCreate (HandlerService.java:55) at fr.free.nrw.commons.upload.UploadService.onCreate (UploadService.java:122) at android.app.ActivityThread.handleCreateService (ActivityThread.java:3534) ``` 5 reports, 2 users: ``` java.lang.RuntimeException: at fr.free.nrw.commons.contributions.ContributionsActivity$1.onServiceDisconnected (ContributionsActivity.java:97) at android.app.LoadedApk$ServiceDispatcher.doConnected (LoadedApk.java:1457) at android.app.LoadedApk$ServiceDispatcher$RunConnection.run (LoadedApk.java:1489) at android.os.Handler.handleCallback (Handler.java:754) at android.os.Handler.dispatchMessage (Handler.java:95) at android.os.Looper.loop (Looper.java:163) at android.app.ActivityThread.main (ActivityThread.java:6361) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:904) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:794) ``` 8 reports, 3 users: ``` java.lang.RuntimeException: at android.app.ActivityThread.handleCreateService (ActivityThread.java:3193) at android.app.ActivityThread.-wrap5 (ActivityThread.java) at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1563) at android.os.Handler.dispatchMessage (Handler.java:102) at android.os.Looper.loop (Looper.java:154) at android.app.ActivityThread.main (ActivityThread.java:6123) at java.lang.reflect.Method.invoke (Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:867) at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:757) Caused by: java.lang.RuntimeException: at dagger.android.AndroidInjection.inject (AndroidInjection.java:134) at dagger.android.DaggerService.onCreate (DaggerService.java:27) at fr.free.nrw.commons.HandlerService.onCreate (HandlerService.java:55) at 
fr.free.nrw.commons.upload.UploadService.onCreate (UploadService.java:122) at android.app.ActivityThread.handleCreateService (ActivityThread.java:3183) ```
priority
crashes in we are getting a lot of crashes in production with a crash rate of more than in previous versions this has unfortunately led to a lot of uninstalls the majority of the crashes seem to be related to dagger i have pasted them below reports users java lang runtimeexception at android app activitythread handlecreateservice activitythread java at android app activitythread unknown source at android app activitythread h handlemessage activitythread java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java caused by java lang runtimeexception at dagger android androidinjection inject androidinjection java at dagger android daggerservice oncreate daggerservice java at fr free nrw commons handlerservice oncreate handlerservice java at fr free nrw commons upload uploadservice oncreate uploadservice java at android app activitythread handlecreateservice activitythread java reports users java lang runtimeexception at android app activitythread handlecreateservice activitythread java at android app activitythread activitythread java at android app activitythread h handlemessage activitythread java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java caused by java lang runtimeexception at dagger android androidinjection inject androidinjection java at dagger android daggerservice oncreate daggerservice java at fr free nrw commons handlerservice oncreate handlerservice java at fr free nrw commons upload uploadservice 
oncreate uploadservice java at android app activitythread handlecreateservice activitythread java reports users java lang runtimeexception at fr free nrw commons contributions contributionsactivity onservicedisconnected contributionsactivity java at android app loadedapk servicedispatcher doconnected loadedapk java at android app loadedapk servicedispatcher runconnection run loadedapk java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java reports users java lang runtimeexception at android app activitythread handlecreateservice activitythread java at android app activitythread activitythread java at android app activitythread h handlemessage activitythread java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java caused by java lang runtimeexception at dagger android androidinjection inject androidinjection java at dagger android daggerservice oncreate daggerservice java at fr free nrw commons handlerservice oncreate handlerservice java at fr free nrw commons upload uploadservice oncreate uploadservice java at android app activitythread handlecreateservice activitythread java
1
312,354
9,546,334,339
IssuesEvent
2019-05-01 19:38:16
processing/p5.js-web-editor
https://api.github.com/repos/processing/p5.js-web-editor
closed
userid is email when signed up with google
priority:high type:bug
This is likely a duplicate of #655, but I thought I would file separately since there are other bugs discussed in that issue. At Creative Coding Fest, we noticed that if a user signs up with their Google account, their username is their e-mail address. This is fine for logging in, but it ends up in the path of their sketch, which is a privacy concern. ``` https://editor.p5js.org/emailaddress@gmail.com/sketches/####### ``` Perhaps there is a way we can allow a user to set a custom "user id" when logging in with Google. (Note when logging in with GitHub it defaults to the GitHub username.)
1.0
userid is email when signed up with google - This is likely a duplicate of #655, but I thought I would file separately since there are other bugs discussed in that issue. At Creative Coding Fest, we noticed that if a user signs up with their Google account, their username is their e-mail address. This is fine for logging in, but it ends up in the path of their sketch, which is a privacy concern. ``` https://editor.p5js.org/emailaddress@gmail.com/sketches/####### ``` Perhaps there is a way we can allow a user to set a custom "user id" when logging in with Google. (Note when logging in with GitHub it defaults to the GitHub username.)
priority
userid is email when signed up with google this is likely a duplicate of but i thought i would file separately since there are other bugs discussed in that issue at creative coding fest we noticed that if a user signs up with their google account their username is their e mail this is fine for logging in but it ends up in the path of their sketch which is a privacy concern perhaps there is a way we can allow a user to set a custom user id when logging in with google note when logging in with github it defaults to the github username
1
688,757
23,596,002,553
IssuesEvent
2022-08-23 19:18:40
rstudio/gt
https://api.github.com/repos/rstudio/gt
closed
Multiple column input to aggregate functions in summary rows
Difficulty: [3] Advanced Effort: [3] High Priority: [3] High Type: ★ Enhancement
Aggregation functions only allow input from the column that the summary value is computed for. A typical use case is shown below, where a volume-weighted close price uses both close and volume as inputs to the aggregation function. ``` r suppressPackageStartupMessages(library(dplyr)) library(gt) sp500 %>% slice_head(n = 22) %>% select(date, close, volume) %>% gt() %>% summary_rows('close', groups = NULL, fns = list('Average close' = ~ mean(.))) %>% summary_rows('close', groups = NULL, fns = list('Average volume weighted close' = ~ sum(. * volume) / sum(volume))) #> Error in `summarise()`: #> ! Problem while computing `close = (function (x) ...`. #> ℹ The error occurred in group 1: ::group_id:: = "::GRAND_SUMMARY". #> Caused by error in `fn()`: #> ! object 'volume' not found ``` <sup>Created on 2022-05-26 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup> [Another example at stackoverflow with ratios](https://stackoverflow.com/questions/62541285/r-gt-summary-rows-ratio-total-row) I have cloned the project and changed 'dt_summary.r' to allow access to all columns to get the desired output, as shown below. I have some misgivings about doing summaries inside gt and then extracting them; I would have preferred to separate data, including summaries, from formatting completely. Anyway, I can make a pull request for the summary change if desired. ``` r suppressPackageStartupMessages(library(dplyr)) library(gt_aggr) sp500 %>% slice_head(n = 22) %>% select(date, close, volume) %>% gt() %>% summary_rows('close', groups = NULL, fns = list('Average close' = ~ mean(.))) %>% summary_rows('close', groups = NULL, fns = list('Average volume weighted close' = ~ sum(.
* volume) / sum(volume))) ``` <sup>Created on 2022-05-26 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup> <!DOCTYPE html> <html> <body> <div id="zkjmqivhos" style="overflow-x:auto;overflow-y:auto;width:auto;height:auto;"> <table class="gt_table"> <thead class="gt_col_headings"> <tr> <th class="gt_col_heading gt_columns_bottom_border gt_left" rowspan="1" colspan="1"></th> <th class="gt_col_heading gt_columns_bottom_border gt_left" rowspan="1" colspan="1">date</th> <th class="gt_col_heading gt_columns_bottom_border gt_right" rowspan="1" colspan="1">close</th> <th class="gt_col_heading gt_columns_bottom_border gt_right" rowspan="1" colspan="1">volume</th> </tr> </thead> <tbody class="gt_table_body"> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-31</td> <td class="gt_row gt_right">2043.94</td> <td class="gt_row gt_right">2655330000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-30</td> <td class="gt_row gt_right">2063.36</td> <td class="gt_row gt_right">2367430000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-29</td> <td class="gt_row gt_right">2078.36</td> <td class="gt_row gt_right">2542000000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-28</td> <td class="gt_row gt_right">2056.50</td> <td class="gt_row gt_right">2492510000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-24</td> <td class="gt_row gt_right">2060.99</td> <td class="gt_row gt_right">1411860000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-23</td> <td class="gt_row gt_right">2064.29</td> <td class="gt_row gt_right">3484090000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-22</td> <td class="gt_row gt_right">2038.97</td> <td class="gt_row gt_right">3520860000</td></tr> <tr><td 
class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-21</td> <td class="gt_row gt_right">2021.15</td> <td class="gt_row gt_right">3760280000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-18</td> <td class="gt_row gt_right">2005.55</td> <td class="gt_row gt_right">6683070000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-17</td> <td class="gt_row gt_right">2041.89</td> <td class="gt_row gt_right">4327390000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-16</td> <td class="gt_row gt_right">2073.07</td> <td class="gt_row gt_right">4635450000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-15</td> <td class="gt_row gt_right">2043.41</td> <td class="gt_row gt_right">4353540000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-14</td> <td class="gt_row gt_right">2021.94</td> <td class="gt_row gt_right">4612440000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-11</td> <td class="gt_row gt_right">2012.37</td> <td class="gt_row gt_right">4301060000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-10</td> <td class="gt_row gt_right">2052.23</td> <td class="gt_row gt_right">3715150000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-09</td> <td class="gt_row gt_right">2047.62</td> <td class="gt_row gt_right">4385250000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-08</td> <td class="gt_row gt_right">2063.59</td> <td class="gt_row gt_right">4173570000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-07</td> <td class="gt_row gt_right">2077.07</td> <td class="gt_row gt_right">4043820000</td></tr> <tr><td class="gt_row gt_right 
gt_stub"></td> <td class="gt_row gt_left">2015-12-04</td> <td class="gt_row gt_right">2091.69</td> <td class="gt_row gt_right">4214910000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-03</td> <td class="gt_row gt_right">2049.62</td> <td class="gt_row gt_right">4306490000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-02</td> <td class="gt_row gt_right">2079.51</td> <td class="gt_row gt_right">3950640000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-01</td> <td class="gt_row gt_right">2102.63</td> <td class="gt_row gt_right">3712120000</td></tr> <tr><td class="gt_row gt_right gt_stub gt_grand_summary_row gt_first_grand_summary_row">Average close</td> <td class="gt_row gt_left gt_grand_summary_row gt_first_grand_summary_row">&mdash;</td> <td class="gt_row gt_right gt_grand_summary_row gt_first_grand_summary_row">2,054.08</td> <td class="gt_row gt_right gt_grand_summary_row gt_first_grand_summary_row">&mdash;</td></tr> <tr><td class="gt_row gt_right gt_stub gt_grand_summary_row gt_last_summary_row">Average volume weighted close</td> <td class="gt_row gt_left gt_grand_summary_row gt_last_summary_row">&mdash;</td> <td class="gt_row gt_right gt_grand_summary_row gt_last_summary_row">2,051.51</td> <td class="gt_row gt_right gt_grand_summary_row gt_last_summary_row">&mdash;</td></tr> </tbody> </table> </div> </body> </html>
1.0
Multiple column input to aggregate functions in summary rows - Aggregation functions only allow input from the column that the summary value is computed for. A typical use case is shown below, where a volume-weighted close price uses both close and volume as inputs to the aggregation function. ``` r suppressPackageStartupMessages(library(dplyr)) library(gt) sp500 %>% slice_head(n = 22) %>% select(date, close, volume) %>% gt() %>% summary_rows('close', groups = NULL, fns = list('Average close' = ~ mean(.))) %>% summary_rows('close', groups = NULL, fns = list('Average volume weighted close' = ~ sum(. * volume) / sum(volume))) #> Error in `summarise()`: #> ! Problem while computing `close = (function (x) ...`. #> ℹ The error occurred in group 1: ::group_id:: = "::GRAND_SUMMARY". #> Caused by error in `fn()`: #> ! object 'volume' not found ``` <sup>Created on 2022-05-26 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup> [Another example at stackoverflow with ratios](https://stackoverflow.com/questions/62541285/r-gt-summary-rows-ratio-total-row) I have cloned the project and changed 'dt_summary.r' to allow access to all columns to get the desired output, as shown below. I have some misgivings about doing summaries inside gt and then extracting them; I would have preferred to separate data, including summaries, from formatting completely. Anyway, I can make a pull request for the summary change if desired. ``` r suppressPackageStartupMessages(library(dplyr)) library(gt_aggr) sp500 %>% slice_head(n = 22) %>% select(date, close, volume) %>% gt() %>% summary_rows('close', groups = NULL, fns = list('Average close' = ~ mean(.))) %>% summary_rows('close', groups = NULL, fns = list('Average volume weighted close' = ~ sum(.
* volume) / sum(volume))) ``` <sup>Created on 2022-05-26 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup> <!DOCTYPE html> <html> <body> <div id="zkjmqivhos" style="overflow-x:auto;overflow-y:auto;width:auto;height:auto;"> <table class="gt_table"> <thead class="gt_col_headings"> <tr> <th class="gt_col_heading gt_columns_bottom_border gt_left" rowspan="1" colspan="1"></th> <th class="gt_col_heading gt_columns_bottom_border gt_left" rowspan="1" colspan="1">date</th> <th class="gt_col_heading gt_columns_bottom_border gt_right" rowspan="1" colspan="1">close</th> <th class="gt_col_heading gt_columns_bottom_border gt_right" rowspan="1" colspan="1">volume</th> </tr> </thead> <tbody class="gt_table_body"> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-31</td> <td class="gt_row gt_right">2043.94</td> <td class="gt_row gt_right">2655330000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-30</td> <td class="gt_row gt_right">2063.36</td> <td class="gt_row gt_right">2367430000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-29</td> <td class="gt_row gt_right">2078.36</td> <td class="gt_row gt_right">2542000000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-28</td> <td class="gt_row gt_right">2056.50</td> <td class="gt_row gt_right">2492510000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-24</td> <td class="gt_row gt_right">2060.99</td> <td class="gt_row gt_right">1411860000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-23</td> <td class="gt_row gt_right">2064.29</td> <td class="gt_row gt_right">3484090000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-22</td> <td class="gt_row gt_right">2038.97</td> <td class="gt_row gt_right">3520860000</td></tr> <tr><td 
class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-21</td> <td class="gt_row gt_right">2021.15</td> <td class="gt_row gt_right">3760280000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-18</td> <td class="gt_row gt_right">2005.55</td> <td class="gt_row gt_right">6683070000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-17</td> <td class="gt_row gt_right">2041.89</td> <td class="gt_row gt_right">4327390000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-16</td> <td class="gt_row gt_right">2073.07</td> <td class="gt_row gt_right">4635450000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-15</td> <td class="gt_row gt_right">2043.41</td> <td class="gt_row gt_right">4353540000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-14</td> <td class="gt_row gt_right">2021.94</td> <td class="gt_row gt_right">4612440000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-11</td> <td class="gt_row gt_right">2012.37</td> <td class="gt_row gt_right">4301060000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-10</td> <td class="gt_row gt_right">2052.23</td> <td class="gt_row gt_right">3715150000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-09</td> <td class="gt_row gt_right">2047.62</td> <td class="gt_row gt_right">4385250000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-08</td> <td class="gt_row gt_right">2063.59</td> <td class="gt_row gt_right">4173570000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-07</td> <td class="gt_row gt_right">2077.07</td> <td class="gt_row gt_right">4043820000</td></tr> <tr><td class="gt_row gt_right 
gt_stub"></td> <td class="gt_row gt_left">2015-12-04</td> <td class="gt_row gt_right">2091.69</td> <td class="gt_row gt_right">4214910000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-03</td> <td class="gt_row gt_right">2049.62</td> <td class="gt_row gt_right">4306490000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-02</td> <td class="gt_row gt_right">2079.51</td> <td class="gt_row gt_right">3950640000</td></tr> <tr><td class="gt_row gt_right gt_stub"></td> <td class="gt_row gt_left">2015-12-01</td> <td class="gt_row gt_right">2102.63</td> <td class="gt_row gt_right">3712120000</td></tr> <tr><td class="gt_row gt_right gt_stub gt_grand_summary_row gt_first_grand_summary_row">Average close</td> <td class="gt_row gt_left gt_grand_summary_row gt_first_grand_summary_row">&mdash;</td> <td class="gt_row gt_right gt_grand_summary_row gt_first_grand_summary_row">2,054.08</td> <td class="gt_row gt_right gt_grand_summary_row gt_first_grand_summary_row">&mdash;</td></tr> <tr><td class="gt_row gt_right gt_stub gt_grand_summary_row gt_last_summary_row">Average volume weighted close</td> <td class="gt_row gt_left gt_grand_summary_row gt_last_summary_row">&mdash;</td> <td class="gt_row gt_right gt_grand_summary_row gt_last_summary_row">2,051.51</td> <td class="gt_row gt_right gt_grand_summary_row gt_last_summary_row">&mdash;</td></tr> </tbody> </table> </div> </body> </html>
priority
multiple column input to aggregate functions in summary rows aggregation functions only allow input from the column for which the summary value is for typical use case shown below where a volume weighted close price uses both close and volume as inputs to the aggregation function r suppresspackagestartupmessages library dplyr library gt slice head n select date close volume gt summary rows close groups null fns list average close mean summary rows close groups null fns list average volume weighted close sum volume sum volume error in summarise problem while computing close function x ℹ the error occurred in group group id grand summary caused by error in fn object volume not found created on by the i have cloned the project and changed dt summary r to allow access to all columns to get the desired output as shown below i have some misgivings about doing summaries inside gt and then extracting them i would have preferred to separate data including summaries and format completely anyways i can make a pull request for the summary change if desired r suppresspackagestartupmessages library dplyr library gt aggr slice head n select date close volume gt summary rows close groups null fns list average close mean summary rows close groups null fns list average volume weighted close sum volume sum volume created on by the date close volume average close mdash mdash average volume weighted close mdash mdash
1
589,652
17,755,244,074
IssuesEvent
2021-08-28 16:20:50
uki00a/carol
https://api.github.com/repos/uki00a/carol
closed
Fix flaky tests
bug high priority linux ci
- https://github.com/uki00a/carol/runs/704032173 - https://github.com/uki00a/carol/runs/703998960 ```shell test Application#onExit ... FAILED (96ms) at Object.receive (file:///home/runner/work/carol/carol/transport.ts:44:13) at async ChromeImpl.readLoop (file:///home/runner/work/carol/carol/chrome.ts:241:13) error: Uncaught ConnectionReset: Connection reset by peer (os error 104) at unwrapResponse ($deno$/ops/dispatch_minimal.ts:63:11) test custom executablePath ... Makefile:7: recipe for target 'test' failed at Object.sendAsyncMinimal ($deno$/ops/dispatch_minimal.ts:106:10) at async Object.write ($deno$/ops/io.ts:65:18) at async BufWriter.flush (https://deno.land/std@0.53.0/io/bufio.ts:479:25) at async writeFrame (https://deno.land/std@0.53.0/ws/mod.ts:144:3) make: *** [test] Error 1 ##[error]Process completed with exit code 2. ``` :thinking: :cry:
1.0
Fix flaky tests - - https://github.com/uki00a/carol/runs/704032173 - https://github.com/uki00a/carol/runs/703998960 ```shell test Application#onExit ... FAILED (96ms) at Object.receive (file:///home/runner/work/carol/carol/transport.ts:44:13) at async ChromeImpl.readLoop (file:///home/runner/work/carol/carol/chrome.ts:241:13) error: Uncaught ConnectionReset: Connection reset by peer (os error 104) at unwrapResponse ($deno$/ops/dispatch_minimal.ts:63:11) test custom executablePath ... Makefile:7: recipe for target 'test' failed at Object.sendAsyncMinimal ($deno$/ops/dispatch_minimal.ts:106:10) at async Object.write ($deno$/ops/io.ts:65:18) at async BufWriter.flush (https://deno.land/std@0.53.0/io/bufio.ts:479:25) at async writeFrame (https://deno.land/std@0.53.0/ws/mod.ts:144:3) make: *** [test] Error 1 ##[error]Process completed with exit code 2. ``` :thinking: :cry:
priority
fix flaky tests shell test application onexit failed at object receive file home runner work carol carol transport ts at async chromeimpl readloop file home runner work carol carol chrome ts error uncaught connectionreset connection reset by peer os error at unwrapresponse deno ops dispatch minimal ts test custom executablepath makefile recipe for target test failed at object sendasyncminimal deno ops dispatch minimal ts at async object write deno ops io ts at async bufwriter flush at async writeframe make error process completed with exit code thinking cry
1
286,460
8,788,574,771
IssuesEvent
2018-12-20 22:45:27
OregonDigital/OD2
https://api.github.com/repos/OregonDigital/OD2
closed
#show page broken
Bug Priority - High
### Descriptive summary Staging show page is broken failing with: ``` F, [2018-12-20T21:51:19.010722 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] ActionView::Template::Error (undefined method `join' for nil:NilClass): F, [2018-12-20T21:51:19.010924 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 8: <meta property="og:description" content="<%= @presenter.description.first.truncate(200) rescue @presenter.title.first %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 9: <meta property="og:image" content="<%= @presenter.download_url %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 10: <meta property="og:url" content="<%= polymorphic_url([main_app, @presenter]) %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 11: <meta name="twitter:data1" content="<%= @presenter.keyword.join(', ') %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 12: <meta name="twitter:label1" content="Keywords" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 13: <meta name="twitter:data2" content="<%= @presenter.rights_statement.first %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 14: <meta name="twitter:label2" content="Rights Statement" /> F, [2018-12-20T21:51:19.010979 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] F, [2018-12-20T21:51:19.011073 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] /usr/local/bundle/bundler/gems/hyrax-c0699505fc89/app/views/shared/_citations.html.erb:11:in `block in __usr_local_bundle_bundler_gems_hyrax_c_______fc___app_views_shared__citations_html_erb___3884050912110239123_46978725616740' ```
1.0
#show page broken - ### Descriptive summary Staging show page is broken failing with: ``` F, [2018-12-20T21:51:19.010722 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] ActionView::Template::Error (undefined method `join' for nil:NilClass): F, [2018-12-20T21:51:19.010924 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 8: <meta property="og:description" content="<%= @presenter.description.first.truncate(200) rescue @presenter.title.first %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 9: <meta property="og:image" content="<%= @presenter.download_url %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 10: <meta property="og:url" content="<%= polymorphic_url([main_app, @presenter]) %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 11: <meta name="twitter:data1" content="<%= @presenter.keyword.join(', ') %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 12: <meta name="twitter:label1" content="Keywords" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 13: <meta name="twitter:data2" content="<%= @presenter.rights_statement.first %>" /> [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] 14: <meta name="twitter:label2" content="Rights Statement" /> F, [2018-12-20T21:51:19.010979 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] F, [2018-12-20T21:51:19.011073 #34] FATAL -- : [1df4e8cc-eb9f-4040-bcfa-56ca012b6586] /usr/local/bundle/bundler/gems/hyrax-c0699505fc89/app/views/shared/_citations.html.erb:11:in `block in __usr_local_bundle_bundler_gems_hyrax_c_______fc___app_views_shared__citations_html_erb___3884050912110239123_46978725616740' ```
priority
show page broken descriptive summary staging show page is broken failing with f fatal actionview template error undefined method join for nil nilclass f fatal f fatal f fatal usr local bundle bundler gems hyrax app views shared citations html erb in block in usr local bundle bundler gems hyrax c fc app views shared citations html erb
1
514,326
14,937,340,663
IssuesEvent
2021-01-25 14:31:32
bounswe/bounswe2020group8
https://api.github.com/repos/bounswe/bounswe2020group8
closed
Guest user purchase
Priority: High Status: Completed enhancement web
**Describe the feature** Guest users should be able to purchase the products in their cart. **Describe solutions you've considered** The guest users do not have stored address and credit card information, so they will be asked at the purchase confirmation page.
1.0
Guest user purchase - **Describe the feature** Guest users should be able to purchase the products in their cart. **Describe solutions you've considered** The guest users do not have stored address and credit card information, so they will be asked at the purchase confirmation page.
priority
guest user purchase describe the feature guest users should be able to purchase the products in their cart describe solutions you ve considered the guest users do not have stored address and credit card information so they will be asked at the purchase confirmation page
1
438,344
12,626,674,487
IssuesEvent
2020-06-14 17:41:18
zulip/zulip
https://api.github.com/repos/zulip/zulip
closed
:checkered_flag: 🏁 emoji is missing from reactions popover
area: emoji bug priority: high
The emoji 🏁 U+1F3C1 CHEQUERED FLAG, which we call `checkered_flag` as its canonical name, works fine in the autocomplete in the compose box: ![image](https://user-images.githubusercontent.com/28173/84319006-29ff7000-ab24-11ea-91e1-5efd7bda60c5.png) But it doesn't appear in the popover search when you try to add an emoji reaction: ![image](https://user-images.githubusercontent.com/28173/84318962-15bb7300-ab24-11ea-8043-3a771b796f0d.png) Originally reported [in chat](https://chat.zulip.org/#narrow/stream/9-issues/topic/hidden.20emoji/near/900743).
1.0
:checkered_flag: 🏁 emoji is missing from reactions popover - The emoji 🏁 U+1F3C1 CHEQUERED FLAG, which we call `checkered_flag` as its canonical name, works fine in the autocomplete in the compose box: ![image](https://user-images.githubusercontent.com/28173/84319006-29ff7000-ab24-11ea-91e1-5efd7bda60c5.png) But it doesn't appear in the popover search when you try to add an emoji reaction: ![image](https://user-images.githubusercontent.com/28173/84318962-15bb7300-ab24-11ea-8043-3a771b796f0d.png) Originally reported [in chat](https://chat.zulip.org/#narrow/stream/9-issues/topic/hidden.20emoji/near/900743).
priority
checkered flag 🏁 emoji is missing from reactions popover the emoji 🏁 u chequered flag which we call checkered flag as its canonical name works fine in the autocomplete in the compose box but it doesn t appear in the popover search when you try to add an emoji reaction originally reported
1
129,622
5,100,082,952
IssuesEvent
2017-01-04 10:45:40
hpi-swt2/workshop-portal
https://api.github.com/repos/hpi-swt2/workshop-portal
closed
Consistent German language and meaningful error messages on the "Mein Profil"-Page
High Priority team-hendrik
**As** user **I want to** have consistent german language on the "Mein Profil" - Page and meaningful error messages **in order to** have a great user experience. **Estimate:** This is an important issue for the already existing system. In the future we expect this to be considered throughout the new features. - [ ] There is no mix of languages for names of input fields, error messages and the like - [ ] I get error messages that explain what went wrong and what i can do to fix it - [ ] Profile und Profile bearbeiten ändern
1.0
Consistent German language and meaningful error messages on the "Mein Profil"-Page - **As** user **I want to** have consistent german language on the "Mein Profil" - Page and meaningful error messages **in order to** have a great user experience. **Estimate:** This is an important issue for the already existing system. In the future we expect this to be considered throughout the new features. - [ ] There is no mix of languages for names of input fields, error messages and the like - [ ] I get error messages that explain what went wrong and what i can do to fix it - [ ] Profile und Profile bearbeiten ändern
priority
consistent german language and meaningful error messages on the mein profil page as user i want to have consistent german language on the mein profil page and meaningful error messages in order to have a great user experience estimate this is an important issue for the already existing system in the future we expect this to be considered throughout the new features there is no mix of languages for names of input fields error messages and the like i get error messages that explain what went wrong and what i can do to fix it profile und profile bearbeiten ändern
1
623,526
19,671,033,296
IssuesEvent
2022-01-11 07:16:50
kartoza/clean-cooking-platform
https://api.github.com/repos/kartoza/clean-cooking-platform
closed
Weights not included in the URL
High Priority
Weights are not included in the URL, so this means that even after setting a weight and then copy-pasting the link into a new window the weights of all datasets go back to 1. We need them in some of our presets, @dimasciput can you do the same thing for weights as you did for the filters as documented in #179?
1.0
Weights not included in the URL - Weights are not included in the URL, so this means that even after setting a weight and then copy-pasting the link into a new window the weights of all datasets go back to 1. We need them in some of our presets, @dimasciput can you do the same thing for weights as you did for the filters as documented in #179?
priority
weights not included in the url weights are not included in the url so this means that even after setting a weight and then copy pasting the link into a new window the weights of all datasets go back to we need them in some of our presets dimasciput can you do the same thing for weights as you did for the filters as documented in
1
126,919
5,007,579,924
IssuesEvent
2016-12-12 17:06:23
Codewars/codewars.com
https://api.github.com/repos/Codewars/codewars.com
closed
Every time I enter in my information the page refreshes and will not let me sign up.
bug high priority
Every time I enter in my information the page refreshes and will not let me sign up/enlist.
1.0
Every time I enter in my information the page refreshes and will not let me sign up. - Every time I enter in my information the page refreshes and will not let me sign up/enlist.
priority
every time i enter in my information the page refreshes and will not let me sign up every time i enter in my information the page refreshes and will not let me sign up enlist
1
819,150
30,722,135,228
IssuesEvent
2023-07-27 16:41:50
KinsonDigital/Infrastructure
https://api.github.com/repos/KinsonDigital/Infrastructure
opened
🚧Fix pathing issue in release workflow
🐛bug high priority
### I have done the items below . . . - [X] I have updated the title without removing the 🚧 emoji. ### Description In the _**dotnet-lib-release.yml**_ reusable workflow, fix a pathing issue in the `perform_release` job and `Creating ${{ inputs.release-type }} GitHub Release` step where the `body_path` and `files` for the `softprops/action-gh-release@v1` GitHub action workflow input values are incorrect. Also, check if this is the issue as well for the _**dotnet-action-release.yml**_ reusable workflow as well. ### Acceptance Criteria **This issue is finished when:** - [ ] dotnet lib release workflow updated ### ToDo Items - [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below. - [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below. - [X] Issue linked to the correct project. ### Issue Dependencies _No response_ ### Related Work _No response_ ### Additional Information: **_<details closed><summary>Change Type Labels</summary>_** | Change Type | Label | |---------------------|---------------------------| | Bug Fixes | `🐛bug` | | Breaking Changes | `🧨breaking changes` | | Enhancement | `enhancement` | | Workflow Changes | `workflow` | | Code Doc Changes | `🗒️documentation code` | | Product Doc Changes | `📝documentation product` | </details> **_<details closed><summary>Priority Type Labels</summary>_** | Priority Type | Label | |---------------------|--------------------------------------------------------------------------| | Low Priority | `low priority` | | Medium Priority | `medium priority` | | High Priority | `high priority` | </details> ### Code of Conduct - [X] I agree to follow this project's Code of Conduct.
1.0
🚧Fix pathing issue in release workflow - ### I have done the items below . . . - [X] I have updated the title without removing the 🚧 emoji. ### Description In the _**dotnet-lib-release.yml**_ reusable workflow, fix a pathing issue in the `perform_release` job and `Creating ${{ inputs.release-type }} GitHub Release` step where the `body_path` and `files` for the `softprops/action-gh-release@v1` GitHub action workflow input values are incorrect. Also, check if this is the issue as well for the _**dotnet-action-release.yml**_ reusable workflow as well. ### Acceptance Criteria **This issue is finished when:** - [ ] dotnet lib release workflow updated ### ToDo Items - [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below. - [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below. - [X] Issue linked to the correct project. ### Issue Dependencies _No response_ ### Related Work _No response_ ### Additional Information: **_<details closed><summary>Change Type Labels</summary>_** | Change Type | Label | |---------------------|---------------------------| | Bug Fixes | `🐛bug` | | Breaking Changes | `🧨breaking changes` | | Enhancement | `enhancement` | | Workflow Changes | `workflow` | | Code Doc Changes | `🗒️documentation code` | | Product Doc Changes | `📝documentation product` | </details> **_<details closed><summary>Priority Type Labels</summary>_** | Priority Type | Label | |---------------------|--------------------------------------------------------------------------| | Low Priority | `low priority` | | Medium Priority | `medium priority` | | High Priority | `high priority` | </details> ### Code of Conduct - [X] I agree to follow this project's Code of Conduct.
priority
🚧fix pathing issue in release workflow i have done the items below i have updated the title without removing the 🚧 emoji description in the dotnet lib release yml reusable workflow fix a pathing issue in the perform release job and creating inputs release type github release step where the body path and files for the softprops action gh release github action workflow input values are incorrect also check if this is the issue as well for the dotnet action release yml reusable workflow as well acceptance criteria this issue is finished when dotnet lib release workflow updated todo items priority label added to this issue refer to the priority type labels section below change type labels added to this issue refer to the change type labels section below issue linked to the correct project issue dependencies no response related work no response additional information change type labels change type label bug fixes 🐛bug breaking changes 🧨breaking changes enhancement enhancement workflow changes workflow code doc changes 🗒️documentation code product doc changes 📝documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct
1
472,714
13,630,033,643
IssuesEvent
2020-09-24 15:53:05
AY2021S1-CS2103T-T09-2/tp
https://api.github.com/repos/AY2021S1-CS2103T-T09-2/tp
closed
As a student who has no knowledge of workout, I want to view what exercise routines the application has
priority.HIGH type.story
... so that I can choose for myself.
1.0
As a student who has no knowledge of workout, I want to view what exercise routines the application has - ... so that I can choose for myself.
priority
as a student who has no knowledge of workout i want to view what exercise routines the application has so that i can choose for myself
1
350,487
10,491,094,224
IssuesEvent
2019-09-25 10:20:18
conan-io/conan
https://api.github.com/repos/conan-io/conan
closed
Consider modify restriction of urllib3 at requirements.txt
complex: low priority: high stage: review type: engineering
Versions `1.25.4` and `1.25.5` are affected by a bug and should be excluded no matter what. If we could forbid these two versions and `1.25.X` is released to solve the [issue](https://github.com/urllib3/urllib3/issues/1683) we should change the `requirements.txt` file. We prefer to stay < 1.25.4 just in case they release another buggy version without the patch.
1.0
Consider modify restriction of urllib3 at requirements.txt - Versions `1.25.4` and `1.25.5` are affected by a bug and should be excluded no matter what. If we could forbid these two versions and `1.25.X` is released to solve the [issue](https://github.com/urllib3/urllib3/issues/1683) we should change the `requirements.txt` file. We prefer to stay < 1.25.4 just in case they release another buggy version without the patch.
priority
consider modify restriction of at requirements txt versions and are affected by a bug and should be excluded no matter what if we could forbid these two versions and x is released to solve the we should change the requirements txt file we prefer to stay just in case they release another buggy version without the patch
1
214,900
7,279,315,650
IssuesEvent
2018-02-22 03:40:22
borela/naomi
https://api.github.com/repos/borela/naomi
closed
js objects with dynamic values ignored
bug priority: high
Thank you for Naomi, it has helped me find bad javascript syntax within my code which I have corrected as I notice them. I encountered one issue however and am not sure if what I'm doing is bad code or something that is not being picked up by Naomi since it gets interpreted correctly. I believe the issue seems to be with me trying to dynamically determine the values of the object's properties within the initialization of the object itself: ![screen shot 2018-02-21 at 3 17 48 pm](https://user-images.githubusercontent.com/6924108/36503423-13d78cda-171b-11e8-9b15-e734083a9e1f.png)
1.0
js objects with dynamic values ignored - Thank you for Naomi, it has helped me find bad javascript syntax within my code which I have corrected as I notice them. I encountered one issue however and am not sure if what I'm doing is bad code or something that is not being picked up by Naomi since it gets interpreted correctly. I believe the issue seems to be with me trying to dynamically determine the values of the object's properties within the initialization of the object itself: ![screen shot 2018-02-21 at 3 17 48 pm](https://user-images.githubusercontent.com/6924108/36503423-13d78cda-171b-11e8-9b15-e734083a9e1f.png)
priority
js objects with dynamic values ignored thank you for naomi it has helped me find bad javascript syntax within my code which i have corrected as i notice them i encountered one issue however and am not sure if what i m doing is bad code or something that is not being picked up by naomi since it gets interpreted correctly i believe the issue seems to be with me trying to dynamically determine the values of the object s properties within the initialization of the object itself
1
673,000
22,918,107,127
IssuesEvent
2022-07-17 08:58:10
FTBTeam/FTB-Mods-Issues
https://api.github.com/repos/FTBTeam/FTB-Mods-Issues
closed
[Bug]: Changing Values in the Config files dont work
Bug Priority: High FTB Industrial Contraptions MC 1.19 MC 1.18
### Mod FTB IC ### Mod version ftb-industrial-contraptions-1802.1.6-build.182 ### Forge / Fabric version 1.18.2 40.1.31 ### Modpack & version _No response_ ### What issue are you having? ![2022-06-20_11 58 00](https://user-images.githubusercontent.com/73310949/174577336-1a94e199-80f8-4d1f-845f-fefa04a2c8f0.png) ![unknown](https://user-images.githubusercontent.com/73310949/174577396-31cccc7e-5d6c-4588-9e7a-82d012daeb39.png) ### Crashlogs _No response_ ### Steps to reproduce change config ### Anything else to note? _No response_
1.0
[Bug]: Changing Values in the Config files dont work - ### Mod FTB IC ### Mod version ftb-industrial-contraptions-1802.1.6-build.182 ### Forge / Fabric version 1.18.2 40.1.31 ### Modpack & version _No response_ ### What issue are you having? ![2022-06-20_11 58 00](https://user-images.githubusercontent.com/73310949/174577336-1a94e199-80f8-4d1f-845f-fefa04a2c8f0.png) ![unknown](https://user-images.githubusercontent.com/73310949/174577396-31cccc7e-5d6c-4588-9e7a-82d012daeb39.png) ### Crashlogs _No response_ ### Steps to reproduce change config ### Anything else to note? _No response_
priority
changing values in the config files dont work mod ftb ic mod version ftb industrial contraptions build forge fabric version modpack version no response what issue are you having crashlogs no response steps to reproduce change config anything else to note no response
1
770,717
27,052,527,274
IssuesEvent
2023-02-13 14:12:26
FastcampusMini/mini-project
https://api.github.com/repos/FastcampusMini/mini-project
closed
Develop product search feature
For: API For: Backend Priority: High Status: Completed Type: Feature
## Title Develop product search feature ## Description Create the ProductSearch Repository, Service, and Controller. First, develop keyword-based search functionality. ## Tasks - [x] Create Repository - [x] Create Service - [x] Create Controller - [x] Develop keyword-based search functionality
1.0
Develop product search feature - ## Title Develop product search feature ## Description Create the ProductSearch Repository, Service, and Controller. First, develop keyword-based search functionality. ## Tasks - [x] Create Repository - [x] Create Service - [x] Create Controller - [x] Develop keyword-based search functionality
priority
develop product search feature title develop product search feature description create the productsearch repository service and controller first develop keyword based search functionality tasks create repository create service create controller develop keyword based search functionality
1
369,316
10,895,426,186
IssuesEvent
2019-11-19 10:37:44
OpenSRP/opensrp-client-chw-anc
https://api.github.com/repos/OpenSRP/opensrp-client-chw-anc
closed
Exclusive breastfeeding task in PNC is not working correctly
BA-specific High Priority
- [x] When No is selected, No should appear in the underneath text for the completed task - [x] When Yes is selected, Yes should appear in the underneath text for the completed task Currently, the opposite is happening - When I select No or Yes, the opposite appears Yes or No, respectively To replicate, 1. Do a PNC home visit 2. Select Yes in exclusive breastfeeding 3. Save 4. Text appears No and Yellow task coloring - The opposite should occur
1.0
Exclusive breastfeeding task in PNC is not working correctly - - [x] When No is selected, No should appear in the underneath text for the completed task - [x] When Yes is selected, Yes should appear in the underneath text for the completed task Currently, the opposite is happening - When I select No or Yes, the opposite appears Yes or No, respectively To replicate, 1. Do a PNC home visit 2. Select Yes in exclusive breastfeeding 3. Save 4. Text appears No and Yellow task coloring - The opposite should occur
priority
exclusive breastfeeding task in pnc is not working correctly when no is selected no should appear in the underneath text for the completed task when yes is selected yes should appear in the underneath text for the completed task currently the opposite is happening when i select no or yes the opposite appears yes or no respectively to replicate do a pnc home visit select yes in exclusive breastfeeding save text appears no and yellow task coloring the opposite should occur
1
111,370
4,469,448,582
IssuesEvent
2016-08-25 13:08:46
M4dNation/mpm
https://api.github.com/repos/M4dNation/mpm
closed
Meteor Stop
done enhancement high priority
# Meteor Stop ## Feature description Allow stopping one or more Meteor instances. ## Feature rationale If a ghost Meteor process is running, no framework feature allows stopping it, which causes concurrent-access problems. Relevant command: kill -9 `ps ax | grep node | grep meteor | awk '{print $1}'` ## Sources Development team.
1.0
Meteor Stop - # Meteor Stop ## Feature description Allow stopping one or more Meteor instances. ## Feature rationale If a ghost Meteor process is running, no framework feature allows stopping it, which causes concurrent-access problems. Relevant command: kill -9 `ps ax | grep node | grep meteor | awk '{print $1}'` ## Sources Development team.
priority
meteor stop meteor stop feature description allow stopping one or more meteor instances feature rationale if a ghost meteor process is running no framework feature allows stopping it which causes concurrent access problems relevant command kill ps ax grep node grep meteor awk print sources development team
1
71,641
3,366,122,828
IssuesEvent
2015-11-21 03:21:26
hackthesystemATX/doctrinr
https://api.github.com/repos/hackthesystemATX/doctrinr
reopened
User Flow Diagram
high priority
Someone needs to creat a low-fidelity diagram of the userflow. [Example] (https://www.google.com/search?q=userflow&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjg9KfPxaDJAhWCUogKHTYzC0cQ_AUICCgC&biw=1036&bih=805) Something simple so we can easily communicate it to a desiger/developer.
1.0
User Flow Diagram - Someone needs to creat a low-fidelity diagram of the userflow. [Example] (https://www.google.com/search?q=userflow&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjg9KfPxaDJAhWCUogKHTYzC0cQ_AUICCgC&biw=1036&bih=805) Something simple so we can easily communicate it to a desiger/developer.
priority
user flow diagram someone needs to creat a low fidelity diagram of the userflow something simple so we can easily communicate it to a desiger developer
1
186,333
6,735,325,572
IssuesEvent
2017-10-18 21:20:26
regnauld/netdot-redmine-test
https://api.github.com/repos/regnauld/netdot-redmine-test
closed
Write tool to graphically show IP space availability
Priority 3: High Tracker 1: Bug
--- Author Name: **Carlos Vicente** (Carlos Vicente) Original Redmine Issue: 7, http://localhost:3000/issues/7 Original Assignee: aaron - --- Should allow admin to visually select a block from "free" ones. Consider viewing space constrains with IPv6.
1.0
Write tool to graphically show IP space availability - --- Author Name: **Carlos Vicente** (Carlos Vicente) Original Redmine Issue: 7, http://localhost:3000/issues/7 Original Assignee: aaron - --- Should allow admin to visually select a block from "free" ones. Consider viewing space constrains with IPv6.
priority
write tool to graphically show ip space availability author name carlos vicente carlos vicente original redmine issue original assignee aaron should allow admin to visually select a block from free ones consider viewing space constrains with
1
482,612
13,910,653,204
IssuesEvent
2020-10-20 16:20:24
yalelibrary/YUL-DC
https://api.github.com/repos/yalelibrary/YUL-DC
opened
Re-Indexing from the management app should not clear discovery
high priority software engineering
**ISSSUE** Currently when you trigger a reindex from the management application, the discovery application is reset to 0 search results and then rebuilds the index. We should NEVER do this in a production environment. **ACCEPTANCE** When I trigger a re-index - [ ] Any solr IDs which don't correspond to a current OID in the management application are deleted - [ ] SolrIDs which correspond to a current OID are re-indexed without deleting them
1.0
Re-Indexing from the management app should not clear discovery - **ISSSUE** Currently when you trigger a reindex from the management application, the discovery application is reset to 0 search results and then rebuilds the index. We should NEVER do this in a production environment. **ACCEPTANCE** When I trigger a re-index - [ ] Any solr IDs which don't correspond to a current OID in the management application are deleted - [ ] SolrIDs which correspond to a current OID are re-indexed without deleting them
priority
re indexing from the management app should not clear discovery isssue currently when you trigger a reindex from the management application the discovery application is reset to search results and then rebuilds the index we should never do this in a production environment acceptance when i trigger a re index any solr ids which don t correspond to a current oid in the management application are deleted solrids which correspond to a current oid are re indexed without deleting them
1
178,927
6,620,230,097
IssuesEvent
2017-09-21 14:54:41
cuappdev/eatery
https://api.github.com/repos/cuappdev/eatery
closed
Filter bar not persisting
Priority: High Type: Bug
something changed the behavior of saving UserDefaults in the FilterBar
1.0
Filter bar not persisting - something changed the behavior of saving UserDefaults in the FilterBar
priority
filter bar not persisting something changed the behavior of saving userdefaults in the filterbar
1
386,799
11,450,703,741
IssuesEvent
2020-02-06 10:07:16
onaio/reveal-frontend
https://api.github.com/repos/onaio/reveal-frontend
opened
Increase test coverage for DrillDownTableLinkedCell
Priority: High
Increase test coverage to be >= 85% for: `src/components/DrillDownTableLinkedCell/index.tsx` Part of #658
1.0
Increase test coverage for DrillDownTableLinkedCell - Increase test coverage to be >= 85% for: `src/components/DrillDownTableLinkedCell/index.tsx` Part of #658
priority
increase test coverage for drilldowntablelinkedcell increase test coverage to be for src components drilldowntablelinkedcell index tsx part of
1
197,601
6,961,874,985
IssuesEvent
2017-12-08 11:13:46
sevin7676/SurvivalArenaTracking
https://api.github.com/repos/sevin7676/SurvivalArenaTracking
opened
Tane is vastly over powered in PvP
Priority#1: High Status#1: Reported Type#2: Issue (not bug)
Prior to the release of Tane, there were many complaints the that Sorceress is over powered in PvP, which is mostly true *(espesically given that Scarlett Blade is useless due to a bug https://github.com/sevin7676/SurvivalArenaTracking/issues/1)*. Now that Tane is released, the Sorceress is very weak compared to Tane *(Scarlett Blade and Toadback are even worse off)*. Here is a comparison of Tane to the Sorceress: 1. Tane has more DPS than the Sorceress *(according to his card, but actual DPS appears to be slightly lower from in game testing - which is yet another bug: https://github.com/sevin7676/SurvivalArenaTracking/issues/14) * 2. Tane does splash damage to towers, which lets him easily take out 3+ towers at a time *(often in one hit if he is a high enough level)* 3. Tane has more than double the HP of the Sorceress (at level 20) 4. Tane moves faster than the Sorceress
1.0
Tane is vastly over powered in PvP - Prior to the release of Tane, there were many complaints the that Sorceress is over powered in PvP, which is mostly true *(espesically given that Scarlett Blade is useless due to a bug https://github.com/sevin7676/SurvivalArenaTracking/issues/1)*. Now that Tane is released, the Sorceress is very weak compared to Tane *(Scarlett Blade and Toadback are even worse off)*. Here is a comparison of Tane to the Sorceress: 1. Tane has more DPS than the Sorceress *(according to his card, but actual DPS appears to be slightly lower from in game testing - which is yet another bug: https://github.com/sevin7676/SurvivalArenaTracking/issues/14) * 2. Tane does splash damage to towers, which lets him easily take out 3+ towers at a time *(often in one hit if he is a high enough level)* 3. Tane has more than double the HP of the Sorceress (at level 20) 4. Tane moves faster than the Sorceress
priority
tane is vastly over powered in pvp prior to the release of tane there were many complaints the that sorceress is over powered in pvp which is mostly true espesically given that scarlett blade is useless due to a bug now that tane is released the sorceress is very weak compared to tane scarlett blade and toadback are even worse off here is a comparison of tane to the sorceress tane has more dps than the sorceress according to his card but actual dps appears to be slightly lower from in game testing which is yet another bug tane does splash damage to towers which lets him easily take out towers at a time often in one hit if he is a high enough level tane has more than double the hp of the sorceress at level tane moves faster than the sorceress
1
602,015
18,445,391,123
IssuesEvent
2021-10-15 00:45:57
AY2122S1-CS2103T-T10-4/tp
https://api.github.com/repos/AY2122S1-CS2103T-T10-4/tp
closed
Add genre command
priority.High type.Story
As an organized user, I want to specify the genre of the animes, so that I can keep track of which anime is of which genre.
1.0
Add genre command - As an organized user, I want to specify the genre of the animes, so that I can keep track of which anime is of which genre.
priority
add genre command as an organized user i want to specify the genre of the animes so that i can keep track of which anime is of which genre
1
326,946
9,962,938,104
IssuesEvent
2019-07-07 18:51:15
perry-mitchell/webdav-client
https://api.github.com/repos/perry-mitchell/webdav-client
closed
digest authentication support
Effort: High Priority: High Status: Available Type: Enhancement
Hi, looks like only Basic authentication is currently supported? Would it be possible/easy to add Digest authentication? I am trying to connect to a server that does not appear to support Basic Auth. I get the following error. ``` No 'Authorization: Digest' header found. Either the client didn't send one, or the server is misconfigured ```
1.0
digest authentication support - Hi, looks like only Basic authentication is currently supported? Would it be possible/easy to add Digest authentication? I am trying to connect to a server that does not appear to support Basic Auth. I get the following error. ``` No 'Authorization: Digest' header found. Either the client didn't send one, or the server is misconfigured ```
priority
digest authentication support hi looks like only basic authentication is currently supported would it be possible easy to add digest authentication i am trying to connect to a server that does not appear to support basic auth i get the following error no authorization digest header found either the client didn t send one or the server is misconfigured
1
396,366
11,708,483,274
IssuesEvent
2020-03-08 13:35:00
agreco/monadical
https://api.github.com/repos/agreco/monadical
closed
Remove index.ts circular dependencies
high priority
`index.ts` exports all modules and types, this unfortunately produces a circular dependency. For instance, the following exhibits this issue: ``` # a.ts import Either from 'monadical/either'; a.ts -> either.ts -> index.ts -> either.ts ``` To correct this, index.ts should only export all types, meaning importing the whole library will no longer be supported, since there are other approaches to include the whole library, i.e. es6's star import `import * as Monadical from 'monadical';`. This will inevitably create a breaking change since and will therefore require a major version update.
1.0
Remove index.ts circular dependencies - `index.ts` exports all modules and types, this unfortunately produces a circular dependency. For instance, the following exhibits this issue: ``` # a.ts import Either from 'monadical/either'; a.ts -> either.ts -> index.ts -> either.ts ``` To correct this, index.ts should only export all types, meaning importing the whole library will no longer be supported, since there are other approaches to include the whole library, i.e. es6's star import `import * as Monadical from 'monadical';`. This will inevitably create a breaking change since and will therefore require a major version update.
priority
remove index ts circular dependencies index ts exports all modules and types this unfortunately produces a circular dependency for instance the following exhibits this issue a ts import either from monadical either a ts either ts index ts either ts to correct this index ts should only export all types meaning importing the whole library will no longer be supported since there are other approaches to include the whole library i e s star import import as monadical from monadical this will inevitably create a breaking change since and will therefore require a major version update
1
74,871
3,449,137,062
IssuesEvent
2015-12-16 12:07:14
ReneFGJr/CentroIntegradoPesquisa
https://api.github.com/repos/ReneFGJr/CentroIntegradoPesquisa
opened
Gerar arquivo de pagamento txt
High Priority
Botão [Gerar arquivo] dos pagamentos não está habilitando ![image](https://cloud.githubusercontent.com/assets/10881293/11840332/2a162aec-a3dc-11e5-8534-431b22958b0e.png) ![image](https://cloud.githubusercontent.com/assets/10881293/11840351/54ee1ef0-a3dc-11e5-99a4-349de059f25f.png)
1.0
Gerar arquivo de pagamento txt - Botão [Gerar arquivo] dos pagamentos não está habilitando ![image](https://cloud.githubusercontent.com/assets/10881293/11840332/2a162aec-a3dc-11e5-8534-431b22958b0e.png) ![image](https://cloud.githubusercontent.com/assets/10881293/11840351/54ee1ef0-a3dc-11e5-99a4-349de059f25f.png)
priority
gerar arquivo de pagamento txt botão dos pagamentos não está habilitando
1
570,184
17,020,639,381
IssuesEvent
2021-07-02 18:27:36
qoretechnologies/qorus-vscode
https://api.github.com/repos/qoretechnologies/qorus-vscode
opened
[BUG] webview crashes handling types / mappers / service connectors
bug high-priority
not sure what happens or of there are separate issues - two ways to reproduce - both using the `building-block` repo in branch `feature/example_chatbot`: first example: 1) open service `chatbot-ws-demo` 2) open class connections and open the `WebSockets` connection 3) add an existing mapper between the two actions - select the `chatbot-ws-input` mapper 4) try to save the service - it will time out and cannot be saved second example: 1) open service `chatbot-ws-demo` 2) open class connections and open the `WebSockets` connection 3) create a new mapper between the two actions - the connectors provide types that can be graphically mapped 4) map `msg` -> `msg` and `cid` -> `id` and try to save - it times out and cannot be saved
1.0
[BUG] webview crashes handling types / mappers / service connectors - not sure what happens or of there are separate issues - two ways to reproduce - both using the `building-block` repo in branch `feature/example_chatbot`: first example: 1) open service `chatbot-ws-demo` 2) open class connections and open the `WebSockets` connection 3) add an existing mapper between the two actions - select the `chatbot-ws-input` mapper 4) try to save the service - it will time out and cannot be saved second example: 1) open service `chatbot-ws-demo` 2) open class connections and open the `WebSockets` connection 3) create a new mapper between the two actions - the connectors provide types that can be graphically mapped 4) map `msg` -> `msg` and `cid` -> `id` and try to save - it times out and cannot be saved
priority
webview crashes handling types mappers service connectors not sure what happens or of there are separate issues two ways to reproduce both using the building block repo in branch feature example chatbot first example open service chatbot ws demo open class connections and open the websockets connection add an existing mapper between the two actions select the chatbot ws input mapper try to save the service it will time out and cannot be saved second example open service chatbot ws demo open class connections and open the websockets connection create a new mapper between the two actions the connectors provide types that can be graphically mapped map msg msg and cid id and try to save it times out and cannot be saved
1
516,724
14,986,963,582
IssuesEvent
2021-01-28 22:03:18
HEXRD/hexrdgui
https://api.github.com/repos/HEXRD/hexrdgui
closed
Precision limitation leading to rounding error in omega specs in image loader
hedm high priority invalid
I just noticed this one: I have an imageseries set from chess with 1441 frames spanning (0, 360). Thus the delta omega reflected in the metadata on the imageseries object is 360./1441 ~ 0.249826509. However, when I load these imageseries series via the HEDM-style "Load Data" widget, I see the following: ![image](https://user-images.githubusercontent.com/1154130/105419359-6f743780-5bf3-11eb-9b7b-86611a06fafe.png) The precision limit of 2 decimal places on the delta omega is leading to an erroneous omega max value of 360.25; now, I am not entirely sure how these numbers are getting used, since the imageseries already has metadata attached (@psavery or @bnmajor could you tell me how/where these values get used? Obviously, if you are loading raw images, this is the only way to input the omega metadata, so I imagine these values are indeed important) This bad omega information would cause find-orientations and fit-grains to fail, the range *cannot* exceed 360. In this case, I cannot enter the accurate delta omega manually, and the omega max seems to be slaved to the delta omega value. We will at the very least require a higher degree of precision in these fields to avoid this rounding error (single precision should suffice). We should also raise a warning if the range exceeds 360.
1.0
Precision limitation leading to rounding error in omega specs in image loader - I just noticed this one: I have an imageseries set from chess with 1441 frames spanning (0, 360). Thus the delta omega reflected in the metadata on the imageseries object is 360./1441 ~ 0.249826509. However, when I load these imageseries series via the HEDM-style "Load Data" widget, I see the following: ![image](https://user-images.githubusercontent.com/1154130/105419359-6f743780-5bf3-11eb-9b7b-86611a06fafe.png) The precision limit of 2 decimal places on the delta omega is leading to an erroneous omega max value of 360.25; now, I am not entirely sure how these numbers are getting used, since the imageseries already has metadata attached (@psavery or @bnmajor could you tell me how/where these values get used? Obviously, if you are loading raw images, this is the only way to input the omega metadata, so I imagine these values are indeed important) This bad omega information would cause find-orientations and fit-grains to fail, the range *cannot* exceed 360. In this case, I cannot enter the accurate delta omega manually, and the omega max seems to be slaved to the delta omega value. We will at the very least require a higher degree of precision in these fields to avoid this rounding error (single precision should suffice). We should also raise a warning if the range exceeds 360.
priority
precision limitation leading to rounding error in omega specs in image loader i just noticed this one i have an imageseries set from chess with frames spanning thus the delta omega reflected in the metadata on the imageseries object is however when i load these imageseries series via the hedm style load data widget i see the following the precision limit of decimal places on the delta omega is leading to an erroneous omega max value of now i am not entirely sure how these numbers are getting used since the imageseries already has metadata attached psavery or bnmajor could you tell me how where these values get used obviously if you are loading raw images this is the only way to input the omega metadata so i imagine these values are indeed important this bad omega information would cause find orientations and fit grains to fail the range cannot exceed in this case i cannot enter the accurate delta omega manually and the omega max seems to be slaved to the delta omega value we will at the very least require a higher degree of precision in these fields to avoid this rounding error single precision should suffice we should also raise a warning if the range exceeds
1
272,845
8,518,333,665
IssuesEvent
2018-11-01 11:13:24
digital-york/oasis
https://api.github.com/repos/digital-york/oasis
opened
Set PDF as file to download
Priority - high question
For every summary we upload a PDF and a word document. We need a copy of the word document in case we need to make any changes later. However, when someone downloads a summary we would like it to be the PDF. I think this used to be the case, but something has changed. For the following summary, for example, the download is is the word doc: https://oasis-database.org/concern/summaries/ws859f652?locale=en I've tried changing this in the file manager, but that doesn't seem to do anything.
1.0
Set PDF as file to download - For every summary we upload a PDF and a word document. We need a copy of the word document in case we need to make any changes later. However, when someone downloads a summary we would like it to be the PDF. I think this used to be the case, but something has changed. For the following summary, for example, the download is is the word doc: https://oasis-database.org/concern/summaries/ws859f652?locale=en I've tried changing this in the file manager, but that doesn't seem to do anything.
priority
set pdf as file to download for every summary we upload a pdf and a word document we need a copy of the word document in case we need to make any changes later however when someone downloads a summary we would like it to be the pdf i think this used to be the case but something has changed for the following summary for example the download is is the word doc i ve tried changing this in the file manager but that doesn t seem to do anything
1
168,414
6,370,965,336
IssuesEvent
2017-08-01 15:09:54
mattm401/collabortweet
https://api.github.com/repos/mattm401/collabortweet
closed
Python Script 2: IRR Calculation
High Priority TODO
I need a python script that can convert the export from the database into the document that we need to run our IRR calculations. I will push some example files to our repository in my next push. - **IRR1_189_Exported.csv**: This file is what I am able to extract from the database using the following SQL _SELECT elements.externalId, elementLabels.elementId, elementLabels.labelId, elementLabels.userId FROM elementLabels join elements on elementLabels.elementId = elements.elementId order by elementLabels.elementId ASC_ I need the python script to convert this out CSV into the following XLS file: **IRR1_Recal_Input.xls** Then, save copy of the file without the first element column in it and as a CSV. You should then be able to run this csv file through the following website: http://dfreelon.org/utils/recalfront/recal2/ and get the same results as in: **IRR1_Recal_Output.csv** - Your script should be able to handle the number of fields (label count x2) automatically if possible. - Column names with the identification of user is important for verification (e.g., "Product Review (2)" refers to the user_id 2 in the database.
1.0
Python Script 2: IRR Calculation - I need a python script that can convert the export from the database into the document that we need to run our IRR calculations. I will push some example files to our repository in my next push. - **IRR1_189_Exported.csv**: This file is what I am able to extract from the database using the following SQL _SELECT elements.externalId, elementLabels.elementId, elementLabels.labelId, elementLabels.userId FROM elementLabels join elements on elementLabels.elementId = elements.elementId order by elementLabels.elementId ASC_ I need the python script to convert this out CSV into the following XLS file: **IRR1_Recal_Input.xls** Then, save copy of the file without the first element column in it and as a CSV. You should then be able to run this csv file through the following website: http://dfreelon.org/utils/recalfront/recal2/ and get the same results as in: **IRR1_Recal_Output.csv** - Your script should be able to handle the number of fields (label count x2) automatically if possible. - Column names with the identification of user is important for verification (e.g., "Product Review (2)" refers to the user_id 2 in the database.
priority
python script irr calculation i need a python script that can convert the export from the database into the document that we need to run our irr calculations i will push some example files to our repository in my next push exported csv this file is what i am able to extract from the database using the following sql select elements externalid elementlabels elementid elementlabels labelid elementlabels userid from elementlabels join elements on elementlabels elementid elements elementid order by elementlabels elementid asc i need the python script to convert this out csv into the following xls file recal input xls then save copy of the file without the first element column in it and as a csv you should then be able to run this csv file through the following website and get the same results as in recal output csv your script should be able to handle the number of fields label count automatically if possible column names with the identification of user is important for verification e g product review refers to the user id in the database
1
763,818
26,776,010,385
IssuesEvent
2023-01-31 17:11:25
Kong/kubernetes-testing-framework
https://api.github.com/repos/Kong/kubernetes-testing-framework
closed
Support updated GKE auth system
area/feature priority/high
Per https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke the 1.25 release will begin requiring a kubectl plugin to use kubectl with GKE. This likely requires changes to client-go usage (to use the same [plugin system](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) used by the kubectl plugin) when not using in-cluster configuration. We need to: - Determine how to use the plugin in KTF's kubectl invocations, or if not possible try to warn about its absence. - Determine if we need to make changes to client-go usage. - Develop recommendations for downstream, which will probably need to make similar changes.
1.0
Support updated GKE auth system - Per https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke the 1.25 release will begin requiring a kubectl plugin to use kubectl with GKE. This likely requires changes to client-go usage (to use the same [plugin system](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) used by the kubectl plugin) when not using in-cluster configuration. We need to: - Determine how to use the plugin in KTF's kubectl invocations, or if not possible try to warn about its absence. - Determine if we need to make changes to client-go usage. - Develop recommendations for downstream, which will probably need to make similar changes.
priority
support updated gke auth system per the release will begin requiring a kubectl plugin to use kubectl with gke this likely requires changes to client go usage to use the same used by the kubectl plugin when not using in cluster configuration we need to determine how to use the plugin in ktf s kubectl invocations or if not possible try to warn about its absence determine if we need to make changes to client go usage develop recommendations for downstream which will probably need to make similar changes
1
411,999
12,034,389,518
IssuesEvent
2020-04-13 15:57:25
AY1920S2-CS2103T-F10-1/main
https://api.github.com/repos/AY1920S2-CS2103T-F10-1/main
closed
Nitpicks for UG/DG
priority.High
Just a reminder for all of us: - To always write something after a heading (I forgot what this is called hahahha) - To be consistent in the case (title case or sentence case) for all headings Everyone else please chip in :D
1.0
Nitpicks for UG/DG - Just a reminder for all of us: - To always write something after a heading (I forgot what this is called hahahha) - To be consistent in the case (title case or sentence case) for all headings Everyone else please chip in :D
priority
nitpicks for ug dg just a reminder for all of us to always write something after a heading i forgot what this is called hahahha to be consistent in the case title case or sentence case for all headings everyone else please chip in d
1
198,070
6,969,482,031
IssuesEvent
2017-12-11 05:45:40
intel-analytics/BigDL
https://api.github.com/repos/intel-analytics/BigDL
closed
Result is not match on different platform
high priority Keras support
__For test_lstm:__ 1. keras give the same output for both ubuntu and Mac 2. The output of BigDL is the same with Keras on Ubuntu 3. The output of BigDL on Mac is wrong if we suppose the output on Ubuntu is correct. 4. This would only happen on conv1,2,3D and those recurrent cells. __Ubuntu:__ bigdl output: ``` [[[ 0.45764956 0.32445318 0.42016813 0.49488869 0.51789129] __[ 0.92467415 0.8868469 0.89551592 0.9386903 0.93798643] __ [ 0.9894557 0.98381865 0.9850986 0.99147439 0.99136877] [ 0.99856597 0.9977895 0.99795699 0.9988417 0.9988271 ]] [[ 0.60128599 0.42217481 0.58237129 0.64212716 0.6408878 ] [ 0.94839323 0.91459364 0.92085594 0.95556968 0.95170385] [ 0.99284923 0.98796624 0.98883897 0.99386799 0.99332124] [ 0.99902689 0.99835765 0.99803543 0.99916762 0.99909288]] [[ 0.48004183 0.38841489 0.40418252 0.47224009 0.50881928] [ 0.92917424 0.90413231 0.90907615 0.94264346 0.93752491] [ 0.99010855 0.98640603 0.98711658 0.99203897 0.99130273] [ 0.99865466 0.9981389 0.99763829 0.99891853 0.99881738]]] ``` keras output: ``` [[[ 0.45764959 0.3244532 0.42016816 0.49488872 0.51789135] [ 0.92467415 0.8868469 0.89551598 0.93869025 0.93798631] [ 0.98945576 0.98381865 0.98509866 0.99147439 0.99136883] [ 0.99856603 0.99778944 0.99795687 0.9988417 0.9988271 ]] [[ 0.60128599 0.42217487 0.58237123 0.64212716 0.64088786] [ 0.94839329 0.91459358 0.92085588 0.95556974 0.95170379] [ 0.99284929 0.9879663 0.98883897 0.99386805 0.99332112] [ 0.99902695 0.99835759 0.99803537 0.99916768 0.99909276]] [[ 0.48004186 0.38841495 0.40418252 0.47224003 0.50881928] [ 0.92917424 0.90413237 0.90907615 0.94264346 0.93752486] [ 0.99010861 0.98640597 0.98711663 0.99203902 0.99130279] [ 0.99865466 0.99813884 0.99763823 0.99891865 0.99881732]]] ``` __Mac platform:__ bigdl output: ``` [[[ 0.45764956 0.32445318 0.42016813 0.49488869 0.51789129] __ [ 0.86819339 0.59303617 0.67777336 0.81466472 0.87991822]__ [ 0.97453725 0.72824097 0.68955195 0.80932516 0.92047256] [ 0.9919185 0.77264863 0.72451639 0.88131893 0.96598411]] ``` keras output: ``` [[[ 0.45764959 0.3244532 0.42016816 0.49488872 0.51789135] [ 0.92467415 0.8868469 0.89551598 0.93869025 0.93798631] [ 0.98945576 0.98381865 0.98509866 0.99147439 0.99136883] [ 0.99856603 0.99778944 0.99795687 0.9988417 0.9988271 ]] ```
1.0
Result is not match on different platform - __For test_lstm:__ 1. keras give the same output for both ubuntu and Mac 2. The output of BigDL is the same with Keras on Ubuntu 3. The output of BigDL on Mac is wrong if we suppose the output on Ubuntu is correct. 4. This would only happen on conv1,2,3D and those recurrent cells. __Ubuntu:__ bigdl output: ``` [[[ 0.45764956 0.32445318 0.42016813 0.49488869 0.51789129] __[ 0.92467415 0.8868469 0.89551592 0.9386903 0.93798643] __ [ 0.9894557 0.98381865 0.9850986 0.99147439 0.99136877] [ 0.99856597 0.9977895 0.99795699 0.9988417 0.9988271 ]] [[ 0.60128599 0.42217481 0.58237129 0.64212716 0.6408878 ] [ 0.94839323 0.91459364 0.92085594 0.95556968 0.95170385] [ 0.99284923 0.98796624 0.98883897 0.99386799 0.99332124] [ 0.99902689 0.99835765 0.99803543 0.99916762 0.99909288]] [[ 0.48004183 0.38841489 0.40418252 0.47224009 0.50881928] [ 0.92917424 0.90413231 0.90907615 0.94264346 0.93752491] [ 0.99010855 0.98640603 0.98711658 0.99203897 0.99130273] [ 0.99865466 0.9981389 0.99763829 0.99891853 0.99881738]]] ``` keras output: ``` [[[ 0.45764959 0.3244532 0.42016816 0.49488872 0.51789135] [ 0.92467415 0.8868469 0.89551598 0.93869025 0.93798631] [ 0.98945576 0.98381865 0.98509866 0.99147439 0.99136883] [ 0.99856603 0.99778944 0.99795687 0.9988417 0.9988271 ]] [[ 0.60128599 0.42217487 0.58237123 0.64212716 0.64088786] [ 0.94839329 0.91459358 0.92085588 0.95556974 0.95170379] [ 0.99284929 0.9879663 0.98883897 0.99386805 0.99332112] [ 0.99902695 0.99835759 0.99803537 0.99916768 0.99909276]] [[ 0.48004186 0.38841495 0.40418252 0.47224003 0.50881928] [ 0.92917424 0.90413237 0.90907615 0.94264346 0.93752486] [ 0.99010861 0.98640597 0.98711663 0.99203902 0.99130279] [ 0.99865466 0.99813884 0.99763823 0.99891865 0.99881732]]] ``` __Mac platform:__ bigdl output: ``` [[[ 0.45764956 0.32445318 0.42016813 0.49488869 0.51789129] __ [ 0.86819339 0.59303617 0.67777336 0.81466472 0.87991822]__ [ 0.97453725 0.72824097 0.68955195 0.80932516 0.92047256] [ 0.9919185 0.77264863 0.72451639 0.88131893 0.96598411]] ``` keras output: ``` [[[ 0.45764959 0.3244532 0.42016816 0.49488872 0.51789135] [ 0.92467415 0.8868469 0.89551598 0.93869025 0.93798631] [ 0.98945576 0.98381865 0.98509866 0.99147439 0.99136883] [ 0.99856603 0.99778944 0.99795687 0.9988417 0.9988271 ]] ```
priority
result is not match on different platform for test lstm keras give the same output for both ubuntu and mac the output of bigdl is the same with keras on ubuntu the output of bigdl on mac is wrong if we suppose the output on ubuntu is correct this would only happen on and those recurrent cells ubuntu bigdl output keras output mac platform bigdl output keras output
1
669,458
22,625,921,186
IssuesEvent
2022-06-30 10:37:53
canonical-web-and-design/ubuntu.com
https://api.github.com/repos/canonical-web-and-design/ubuntu.com
closed
Error 500 when searching
Priority: High
When searching in the blog, for instance: https://ubuntu.com/search?q=observability I'm getting a 500 error: ![imagen](https://user-images.githubusercontent.com/939888/176435186-b39d1ac6-db2c-4f47-865d-1c2fc3125f76.png)
1.0
Error 500 when searching - When searching in the blog, for instance: https://ubuntu.com/search?q=observability I'm getting a 500 error: ![imagen](https://user-images.githubusercontent.com/939888/176435186-b39d1ac6-db2c-4f47-865d-1c2fc3125f76.png)
priority
error when searching when searching in the blog for instance i m getting a error
1
165,115
6,263,854,740
IssuesEvent
2017-07-16 00:34:14
OperationCode/operationcode_frontend
https://api.github.com/repos/OperationCode/operationcode_frontend
closed
<Section> components with <p> elements as children don't have good responsiveness
in progress Priority: High Status: Available Type: Bug
# Bug Report ## What is the current behavior? If you reduce your screen to under just 1270px, you will see many pages like /code_schools or /team that use the `<Section>` component have text that gets scrunched up. It actually overflows at multiple view widths. ## What is the expected behavior? Children elements under the `<Section>` component should not go beyond its potential header lines in width. I have an idea of how to implement this (posted as a comment) ## What steps did you take to get this behavior? Simply go to the routes, reduce your view width, and observe ## Additional Info ### Operating System MacOS 10.12.5 ### Browser All 3 major browsers, plus Chrome and Safari on iPhones ### Screenshots ![schools_overflow](https://user-images.githubusercontent.com/9523719/27990683-222b992c-6413-11e7-9057-e221a21cc0f8.png) ![team_overflow](https://user-images.githubusercontent.com/9523719/27990684-2236cdce-6413-11e7-97d6-b21657631067.png)
1.0
<Section> components with <p> elements as children don't have good responsiveness - # Bug Report ## What is the current behavior? If you reduce your screen to under just 1270px, you will see many pages like /code_schools or /team that use the `<Section>` component have text that gets scrunched up. It actually overflows at multiple view widths. ## What is the expected behavior? Children elements under the `<Section>` component should not go beyond its potential header lines in width. I have an idea of how to implement this (posted as a comment) ## What steps did you take to get this behavior? Simply go to the routes, reduce your view width, and observe ## Additional Info ### Operating System MacOS 10.12.5 ### Browser All 3 major browsers, plus Chrome and Safari on iPhones ### Screenshots ![schools_overflow](https://user-images.githubusercontent.com/9523719/27990683-222b992c-6413-11e7-9057-e221a21cc0f8.png) ![team_overflow](https://user-images.githubusercontent.com/9523719/27990684-2236cdce-6413-11e7-97d6-b21657631067.png)
priority
components with elements as children don t have good responsiveness bug report what is the current behavior if you reduce your screen to under just you will see many pages like code schools or team that use the component have text that gets scrunched up it actually overflows at multiple view widths what is the expected behavior children elements under the component should not go beyond its potential header lines in width i have an idea of how to implement this posted as a comment what steps did you take to get this behavior simply go to the routes reduce your view width and observe additional info operating system macos browser all major browsers plus chrome and safari on iphones screenshots
1
745,670
25,994,670,378
IssuesEvent
2022-12-20 10:33:03
ballerina-platform/ballerina-standard-library
https://api.github.com/repos/ballerina-platform/ballerina-standard-library
closed
HTTP trace logs are not working with a mysql client
Priority/High Type/Bug module/sql Team/DIU Reason/EngineeringMistake
**Description:** > $Subject **Steps to reproduce:** Ballerina service : ```ballerina import ballerina/http; import ballerinax/mysql; import ballerinax/mysql.driver as _; configurable string host = "localhost"; configurable int port = 3306; configurable string user = "root"; configurable string password = "root@123"; configurable string database = "ballerina"; mysql:Client db = check new (host, user, password, database, port); listener http:Listener serverEP = new(8080); service on serverEP { resource function 'default [string ...path]() returns string { return "HelloWorld"; } } ``` Run the service while enabling the tracelogs in the console : ``` bal run -- -Cballerina.http.traceLogConsole=true ``` Call the endpoint : ``` curl -v http://localhost:8080/albums ``` The trace logs are not printed. When removing the mysql client the trace logs are printed. **Affected Versions:** Ballerina SwanLake 2201.3.0
1.0
HTTP trace logs are not working with a mysql client - **Description:** > $Subject **Steps to reproduce:** Ballerina service : ```ballerina import ballerina/http; import ballerinax/mysql; import ballerinax/mysql.driver as _; configurable string host = "localhost"; configurable int port = 3306; configurable string user = "root"; configurable string password = "root@123"; configurable string database = "ballerina"; mysql:Client db = check new (host, user, password, database, port); listener http:Listener serverEP = new(8080); service on serverEP { resource function 'default [string ...path]() returns string { return "HelloWorld"; } } ``` Run the service while enabling the tracelogs in the console : ``` bal run -- -Cballerina.http.traceLogConsole=true ``` Call the endpoint : ``` curl -v http://localhost:8080/albums ``` The trace logs are not printed. When removing the mysql client the trace logs are printed. **Affected Versions:** Ballerina SwanLake 2201.3.0
priority
http trace logs are not working with a mysql client description subject steps to reproduce ballerina service ballerina import ballerina http import ballerinax mysql import ballerinax mysql driver as configurable string host localhost configurable int port configurable string user root configurable string password root configurable string database ballerina mysql client db check new host user password database port listener http listener serverep new service on serverep resource function default returns string return helloworld run the service while enabling the tracelogs in the console bal run cballerina http tracelogconsole true call the endpoint curl v the trace logs are not printed when removing the mysql client the trace logs are printed affected versions ballerina swanlake
1
423,466
12,297,239,247
IssuesEvent
2020-05-11 08:29:54
SainsburyWellcomeCentre/cellfinder
https://api.github.com/repos/SainsburyWellcomeCentre/cellfinder
closed
[BUG]error message with reverse transformation in 'cellfinder_cell_standard'
HIGH PRIORITY bug
Hi Adam, I'm trying to reverse transform simulated electrode track coordinates to atlas space, with 'cellfinder_cell_standard' command on windows 10. Amap registration was successful previously. 190917_488_hor_coronal_10um.nii (raw images), downsampled.nii and simulated_track.xml is provided in the [link](https://drive.google.com/drive/folders/1FLTv3RZkF2wqGa7E83rJwQs0h9iPH6uw?usp=sharing). Could you take a look if I did something noticeably wrong again? Command: `(cellfinder) E:\idisco\PV_0007>cellfinder_cell_standard --cells "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/simulated_track.xml" --transformation "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/registration/control_point_file.nii" --ref "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/registration/donwsampled.nii" -o "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/registration/" -x 0.01 -y 0.01 -z 0.01` Error message appeared: ``` 2020-05-10 00:51:35.875226: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll Traceback (most recent call last): File "e:\miniconda3\envs\cellfinder\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "e:\miniconda3\envs\cellfinder\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "E:\miniconda3\envs\cellfinder\Scripts\cellfinder_cell_standard.exe\__main__.py", line 7, in <module> File "e:\miniconda3\envs\cellfinder\lib\site-packages\cellfinder\standard_space\cells_to_standard_space.py", line 213, in main tools.start_logging( AttributeError: module 'cellfinder.tools.tools' has no attribute 'start_logging' ```
1.0
[BUG]error message with reverse transformation in 'cellfinder_cell_standard' - Hi Adam, I'm trying to reverse transform simulated electrode track coordinates to atlas space, with 'cellfinder_cell_standard' command on windows 10. Amap registration was successful previously. 190917_488_hor_coronal_10um.nii (raw images), downsampled.nii and simulated_track.xml is provided in the [link](https://drive.google.com/drive/folders/1FLTv3RZkF2wqGa7E83rJwQs0h9iPH6uw?usp=sharing). Could you take a look if I did something noticeably wrong again? Command: `(cellfinder) E:\idisco\PV_0007>cellfinder_cell_standard --cells "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/simulated_track.xml" --transformation "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/registration/control_point_file.nii" --ref "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/registration/donwsampled.nii" -o "E:/idisco/PV_0007/190917_488_hor_11-37-36_amap/registration/" -x 0.01 -y 0.01 -z 0.01` Error message appeared: ``` 2020-05-10 00:51:35.875226: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll Traceback (most recent call last): File "e:\miniconda3\envs\cellfinder\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "e:\miniconda3\envs\cellfinder\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "E:\miniconda3\envs\cellfinder\Scripts\cellfinder_cell_standard.exe\__main__.py", line 7, in <module> File "e:\miniconda3\envs\cellfinder\lib\site-packages\cellfinder\standard_space\cells_to_standard_space.py", line 213, in main tools.start_logging( AttributeError: module 'cellfinder.tools.tools' has no attribute 'start_logging' ```
priority
error message with reverse transformation in cellfinder cell standard hi adam i m trying to reverse transform simulated electrode track coordinates to atlas space with cellfinder cell standard command on windows amap registration was successful previously hor coronal nii raw images downsampled nii and simulated track xml is provided in the could you take a look if i did something noticeably wrong again command cellfinder e idisco pv cellfinder cell standard cells e idisco pv hor amap simulated track xml transformation e idisco pv hor amap registration control point file nii ref e idisco pv hor amap registration donwsampled nii o e idisco pv hor amap registration x y z error message appeared i tensorflow stream executor platform default dso loader cc successfully opened dynamic library dll traceback most recent call last file e envs cellfinder lib runpy py line in run module as main main mod spec file e envs cellfinder lib runpy py line in run code exec code run globals file e envs cellfinder scripts cellfinder cell standard exe main py line in file e envs cellfinder lib site packages cellfinder standard space cells to standard space py line in main tools start logging attributeerror module cellfinder tools tools has no attribute start logging
1
793,329
27,990,935,518
IssuesEvent
2023-03-27 03:37:02
AY2223S2-CS2103T-W14-2/tp
https://api.github.com/repos/AY2223S2-CS2103T-W14-2/tp
closed
Implement Timetabling for Meet Command
priority.High type.Task
## Russell's TODO - [x] Scheduler's Logic - [x] New Timetable, Module, Lesson classes - [x] Integrate with Meet Command ## Kenny's TODO - [x] Parse Module Information - [x] Modify TagCommand and UnTag Command to attach the Class Slots (LEC3, T12) to each person. - [x] Map each class slot to a particular venue/location. > Modify meet command such that each location suggestion comes equipped with a few suggested timings (now, meet command suggests 10 locations (AMK Hub, NEX,…). What is required is (AMK Hub @ Monday 9am, NEX @ Thursday 6pm,…) ## After the above are done, Integrator's TODO - [x] Integrate functionality of Meet with Tag and Untag Command - [x] Set meeting spot and time on UI.
1.0
Implement Timetabling for Meet Command - ## Russell's TODO - [x] Scheduler's Logic - [x] New Timetable, Module, Lesson classes - [x] Integrate with Meet Command ## Kenny's TODO - [x] Parse Module Information - [x] Modify TagCommand and UnTag Command to attach the Class Slots (LEC3, T12) to each person. - [x] Map each class slot to a particular venue/location. > Modify meet command such that each location suggestion comes equipped with a few suggested timings (now, meet command suggests 10 locations (AMK Hub, NEX,…). What is required is (AMK Hub @ Monday 9am, NEX @ Thursday 6pm,…) ## After the above are done, Integrator's TODO - [x] Integrate functionality of Meet with Tag and Untag Command - [x] Set meeting spot and time on UI.
priority
implement timetabling for meet command russell s todo scheduler s logic new timetable module lesson classes integrate with meet command kenny s todo parse module information modify tagcommand and untag command to attach the class slots to each person map each class slot to a particular venue location modify meet command such that each location suggestion comes equipped with a few suggested timings now meet command suggests locations amk hub nex … what is required is amk hub monday nex thursday … after the above are done integrator s todo integrate functionality of meet with tag and untag command set meeting spot and time on ui
1
162,848
6,176,790,826
IssuesEvent
2017-07-01 17:04:06
TechnionYP5777/Bugquery
https://api.github.com/repos/TechnionYP5777/Bugquery
closed
Set up someone to test the developer installation
High Priority in progress Project Managment Reality Game
Related to #166 Needs to be done before Sunday.
1.0
Set up someone to test the developer installation - Related to #166 Needs to be done before Sunday.
priority
set up someone to test the developer installation related to needs to be done before sunday
1
814,091
30,486,783,055
IssuesEvent
2023-07-18 03:20:30
LLNL/axom
https://api.github.com/repos/LLNL/axom
closed
Add support in sample-based shaper for shaping 3D meshes using 2D contours
enhancement Quest User Request Klee high priority
Quest's sample-based shaper currently supports shaping 2D regions bounded by `c2c` contours onto a 2D mesh and 3D regions bounded by STL triangle meshes onto a 3D mesh. We'd also like to support shaping onto 3D computational mesh using 2D contours, e.g. for surfaces of revolution. One general approach might be to allow the user to specify a lambda for projecting/transforming 3D query points to 2D.
1.0
Add support in sample-based shaper for shaping 3D meshes using 2D contours - Quest's sample-based shaper currently supports shaping 2D regions bounded by `c2c` contours onto a 2D mesh and 3D regions bounded by STL triangle meshes onto a 3D mesh. We'd also like to support shaping onto 3D computational mesh using 2D contours, e.g. for surfaces of revolution. One general approach might be to allow the user to specify a lambda for projecting/transforming 3D query points to 2D.
priority
add support in sample based shaper for shaping meshes using contours quest s sample based shaper currently supports shaping regions bounded by contours onto a mesh and regions bounded by stl triangle meshes onto a mesh we d also like to support shaping onto computational mesh using contours e g for surfaces of revolution one general approach might be to allow the user to specify a lambda for projecting transforming query points to
1
343,379
10,329,026,446
IssuesEvent
2019-09-02 11:04:00
geosolutions-it/geonode
https://api.github.com/repos/geosolutions-it/geonode
opened
Improve calendar heat map
Priority: High analytics enhancement
Improve calendar by using metric_data endpoint with event type params and add calendar widget in details also
1.0
Improve calendar heat map - Improve calendar by using metric_data endpoint with event type params and add calendar widget in details also
priority
improve calendar heat map improve calendar by using metric data endpoint with event type params and add calendar widget in details also
1
175,155
6,547,662,187
IssuesEvent
2017-09-04 15:48:38
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[Studio] Windows Initial page state does not match Linux/Osx
bug Priority: High
Steps: 1. Download and use VM with windows 10 x64 from here https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/ or physical windows machine 2. Download and install the latest java 8 JDK from oracle page http://www.oracle.com/technetwork/java/javase/downloads/index.html 3. Set the JAVA_HOME environment variable to point to the jdk just installed. 4. Download and unzip the crafter authoring bundle from http://downloads.craftersoftware.com/craftercms/community/3.0.0/crafter-cms-authoring.zip 5. Start it by running CRAFTER_HOME/bin/startup.sh (**as an administrator the first time**) 5.1 Ignore the mongodb issue 6. Once it's up, go to http://localhost:8080/studio 7. Log in and create a site using the editorial blueprint. 8. Once the site is created, the state of the objects is not the same as if it were OSX or Linux, since in those OS all pages/assets/components are in a live state, while on windows they are not.(see screenshots) Windows:![windowsstateofpages](https://user-images.githubusercontent.com/574641/29288772-40fce9e0-80f7-11e7-8ab7-558053b37b76.png) Linux:![linuxpagestate](https://user-images.githubusercontent.com/574641/29288784-4efba8b0-80f7-11e7-8721-4d810a8127f6.png) Logs. [logs.zip](https://github.com/craftercms/craftercms/files/1223264/logs.zip)
1.0
[Studio] Windows Initial page state does not match Linux/Osx - Steps: 1. Download and use VM with windows 10 x64 from here https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/ or physical windows machine 2. Download and install the latest java 8 JDK from oracle page http://www.oracle.com/technetwork/java/javase/downloads/index.html 3. Set the JAVA_HOME environment variable to point to the jdk just installed. 4. Download and unzip the crafter authoring bundle from http://downloads.craftersoftware.com/craftercms/community/3.0.0/crafter-cms-authoring.zip 5. Start it by running CRAFTER_HOME/bin/startup.sh (**as an administrator the first time**) 5.1 Ignore the mongodb issue 6. Once it's up, go to http://localhost:8080/studio 7. Log in and create a site using the editorial blueprint. 8. Once the site is created, the state of the objects is not the same as if it were OSX or Linux, since in those OS all pages/assets/components are in a live state, while on windows they are not.(see screenshots) Windows:![windowsstateofpages](https://user-images.githubusercontent.com/574641/29288772-40fce9e0-80f7-11e7-8ab7-558053b37b76.png) Linux:![linuxpagestate](https://user-images.githubusercontent.com/574641/29288784-4efba8b0-80f7-11e7-8721-4d810a8127f6.png) Logs. [logs.zip](https://github.com/craftercms/craftercms/files/1223264/logs.zip)
priority
windows initial page state does not match linux osx steps download and use vm with windows from here or physical windows machine download and install the latest java jdk from oracle page set the java home environment variable to point to the jdk just installed download and unzip the crafter authoring bundle from start it by running crafter home bin startup sh as an administrator the first time ignore the mongodb issue once it s up go to log in and create a site using the editorial blueprint once the site is created the state of the objects is not the same as if it were osx or linux since in those os all pages assets components are in a live state while on windows they are not see screenshots windows linux logs
1
196,262
6,926,320,814
IssuesEvent
2017-11-30 18:45:55
ubc/compair
https://api.github.com/repos/ubc/compair
closed
Add pair selection algorithm based on multidimensional scaling
back end enhancement high priority instructor request
Calculate minimum delta between criteria scores when multiple criteria are used (pairing algorithm, need to keep criteria weight in mind)
1.0
Add pair selection algorithm based on multidimensional scaling - Calculate minimum delta between criteria scores when multiple criteria are used (pairing algorithm, need to keep criteria weight in mind)
priority
add pair selection algorithm based on multidimensional scaling calculate minimum delta between criteria scores when multiple criteria are used pairing algorithm need to keep criteria weight in mind
1
191,045
6,825,164,200
IssuesEvent
2017-11-08 09:33:05
ballerinalang/composer
https://api.github.com/repos/ballerinalang/composer
closed
Finally keyword is not shown as a keyword in try-catch
Priority/High Severity/Major Type/Bug
Pack 0.93 13/09 Finally keyword is not shown as a keyword as it is not colored ![try1](https://user-images.githubusercontent.com/1845370/30416433-c949bce6-9949-11e7-9d2b-28effef3396f.png)
1.0
Finally keyword is not shown as a keyword in try-catch - Pack 0.93 13/09 Finally keyword is not shown as a keyword as it is not colored ![try1](https://user-images.githubusercontent.com/1845370/30416433-c949bce6-9949-11e7-9d2b-28effef3396f.png)
priority
finally keyword is not shown as a keyword in try catch pack finally keyword is not shown as a keyword as it is not colored
1
179,773
6,628,619,623
IssuesEvent
2017-09-23 20:20:52
inaturalist/iNaturalistAndroid
https://api.github.com/repos/inaturalist/iNaturalistAndroid
closed
Compare Tool V1.5
enhancement High Priority
Ken-ichi and I discussed revisions for the next or near future release of the Taxon Compare Tool. See below... ![image](https://user-images.githubusercontent.com/11527994/29850486-03448c6e-8ce3-11e7-9290-6a56512d8efc.png) # Landscape View * Landscape view content arrangement is based on how the user turned the device to arrive at landscape view. ![image](https://user-images.githubusercontent.com/11527994/29850532-4afd7e6c-8ce3-11e7-8e9e-15b2fab6dab2.png) This isn't ready yet. Will let you know when it is! I just need to include some more explanation and specs.
1.0
Compare Tool V1.5 - Ken-ichi and I discussed revisions for the next or near future release of the Taxon Compare Tool. See below... ![image](https://user-images.githubusercontent.com/11527994/29850486-03448c6e-8ce3-11e7-9290-6a56512d8efc.png) # Landscape View * Landscape view content arrangement is based on how the user turned the device to arrive at landscape view. ![image](https://user-images.githubusercontent.com/11527994/29850532-4afd7e6c-8ce3-11e7-8e9e-15b2fab6dab2.png) This isn't ready yet. Will let you know when it is! I just need to include some more explanation and specs.
priority
compare tool ken ichi and i discussed revisions for the next or near future release of the taxon compare tool see below landscape view landscape view content arrangement is based on how the user turned the device to arrive at landscape view this isn t ready yet will let you know when it is i just need to include some more explanation and specs
1
227,241
7,528,160,129
IssuesEvent
2018-04-13 19:41:42
eustasy/gorgon
https://api.github.com/repos/eustasy/gorgon
closed
Include reactions in karma calculation
Language: PHP Priority: High Status: Confirmed Type: Bug
In `/api/get_karma_for_issue.php` we give comments 1 karma each, but we should include reactions in that calculation, seeing as GitHub discourages "+1" or "me too" comments. There is an api for that, but it might be a totally separate query. 1. https://developer.github.com/changes/2016-05-12-reactions-api-preview/ 2. https://developer.github.com/v3/reactions/#list-reactions-for-an-issue
1.0
Include reactions in karma calculation - In `/api/get_karma_for_issue.php` we give comments 1 karma each, but we should include reactions in that calculation, seeing as GitHub discourages "+1" or "me too" comments. There is an api for that, but it might be a totally separate query. 1. https://developer.github.com/changes/2016-05-12-reactions-api-preview/ 2. https://developer.github.com/v3/reactions/#list-reactions-for-an-issue
priority
include reactions in karma calculation in api get karma for issue php we give comments karma each but we should include reactions in that calculation seeing as github discourages or me too comments there is an api for that but it might be a totally separate query
1
486,920
14,016,551,395
IssuesEvent
2020-10-29 14:39:17
stevencohn/OneMore
https://api.github.com/repos/stevencohn/OneMore
closed
Sort function is not working
bug done high priority
Hi, Great product !! Problem: sorting 'Section in this notebook' Using Office 2016 ProPlus Onenote with latest OneMore version 3.3 When trying to sort 'Section in this notebook' in my big onenote, when clicking OK button nothing happens, then its onenote hanging for a bit of time (like 100% cpu), like he is processing or doing something, but nothings happens., suddenly the process stops and I can continue to work further in Onenote like nothings happens. Can you check the code? PS: Sorting pages or notebooks is working fine., it's only sorting 'Section in this notebook' is not working. Regards, Michel
1.0
Sort function is not working - Hi, Great product !! Problem: sorting 'Section in this notebook' Using Office 2016 ProPlus Onenote with latest OneMore version 3.3 When trying to sort 'Section in this notebook' in my big onenote, when clicking OK button nothing happens, then its onenote hanging for a bit of time (like 100% cpu), like he is processing or doing something, but nothings happens., suddenly the process stops and I can continue to work further in Onenote like nothings happens. Can you check the code? PS: Sorting pages or notebooks is working fine., it's only sorting 'Section in this notebook' is not working. Regards, Michel
priority
sort function is not working hi great product problem sorting section in this notebook using office proplus onenote with latest onemore version when trying to sort section in this notebook in my big onenote when clicking ok button nothing happens then its onenote hanging for a bit of time like cpu like he is processing or doing something but nothings happens suddenly the process stops and i can continue to work further in onenote like nothings happens can you check the code ps sorting pages or notebooks is working fine it s only sorting section in this notebook is not working regards michel
1
678,661
23,205,980,180
IssuesEvent
2022-08-02 05:22:44
phetsims/axon
https://api.github.com/repos/phetsims/axon
closed
Can we get rid of getListenerCount?
priority:2-high dev:typescript
From https://github.com/phetsims/axon/issues/402, @marlitas and I would like to remove getListenerCount from the Emitter and Property interfaces. Current usages seem to only be in tests. Can we get rid of the tests? If not, perhaps subclass and make that method public?
1.0
Can we get rid of getListenerCount? - From https://github.com/phetsims/axon/issues/402, @marlitas and I would like to remove getListenerCount from the Emitter and Property interfaces. Current usages seem to only be in tests. Can we get rid of the tests? If not, perhaps subclass and make that method public?
priority
can we get rid of getlistenercount from marlitas and i would like to remove getlistenercount from the emitter and property interfaces current usages seem to only be in tests can we get rid of the tests if not perhaps subclass and make that method public
1
312,952
9,555,389,589
IssuesEvent
2019-05-03 02:59:57
gardners/surveysystem
https://api.github.com/repos/gardners/surveysystem
closed
concurrency problem in save_session
Priority: HIGH backend bug
case if multiple next_questions. ``` REQUEST ERROR Error: [400] Bad Request: save_session() failedsrc/sessions.c:663:save_session(): rename('/var/surveysystem/surveysystem/backend/sessions/a894/write.a8948803-0000-0000-1d12-0fdeb399d053','/var/surveysystem/surveysystem/backend/sessions/a894/a8948803-0000-0000-1d12-0fdeb399d053') failed when updating file for session 'a8948803-0000-0000-1d12-0fdeb399d053' (errno=2)<br> ``` The frontend and the [player](https://github.com/gardners/surveysystem/tree/master/player) send multiple http requests to /updateanswer parallel (not sequential) I assume the issue is that multiple requests try to access `write.a8948803-0000-0000-1d12-0fdeb399d053` at the same time. In this regard: Lighttpd fastcgi spawns 4 or more parallel processes who might also try to access this file independently at the same time The error occurs on my local install when running the player (who is rapidly answering questions) and where there is next to no network lag. This is definitely a stress scenario but not far from reality either. It does not occur when running the player with the same on the server survey
1.0
concurrency problem in save_session - case if multiple next_questions. ``` REQUEST ERROR Error: [400] Bad Request: save_session() failedsrc/sessions.c:663:save_session(): rename('/var/surveysystem/surveysystem/backend/sessions/a894/write.a8948803-0000-0000-1d12-0fdeb399d053','/var/surveysystem/surveysystem/backend/sessions/a894/a8948803-0000-0000-1d12-0fdeb399d053') failed when updating file for session 'a8948803-0000-0000-1d12-0fdeb399d053' (errno=2)<br> ``` The frontend and the [player](https://github.com/gardners/surveysystem/tree/master/player) send multiple http requests to /updateanswer parallel (not sequential) I assume the issue is that multiple requests try to access `write.a8948803-0000-0000-1d12-0fdeb399d053` at the same time. In this regard: Lighttpd fastcgi spawns 4 or more parallel processes who might also try to access this file independently at the same time The error occurs on my local install when running the player (who is rapidly answering questions) and where there is next to no network lag. This is definitely a stress scenario but not far from reality either. It does not occur when running the player with the same on the server survey
priority
concurrency problem in save session case if multiple next questions request error error bad request save session failedsrc sessions c save session rename var surveysystem surveysystem backend sessions write var surveysystem surveysystem backend sessions failed when updating file for session errno the frontend and the send multiple http requests to updateanswer parallel not sequential i assume the issue is that multiple requests try to access write at the same time in this regard lighttpd fastcgi spawns or more parallel processes who might also try to access this file independently at the same time the error occurs on my local install when running the player who is rapidly answering questions and where there is next to no network lag this is definitely a stress scenario but not far from reality either it does not occur when running the player with the same on the server survey
1
471,422
13,566,383,532
IssuesEvent
2020-09-18 13:12:51
w3c/did-core
https://api.github.com/repos/w3c/did-core
closed
Syntactically differentiate data about the DID versus application data
discuss high priority metadata pending close
DID applications should be able to represent data in whatever manner they choose, whereas the representations of data in a DID document about the DID itself should be specified by the DID standard. To meet these dual requirements, these two classes of data must be syntactically differentiated. We need to evolve the DID document syntax to syntactically differentiate between data about the DID versus data for DID applications that happens to be co-resident in the DID document. The former is the domain of the DID standard. The latter is not. Note that this differentiation was initially discussed by @dlongley and @selfissued in #67.
1.0
Syntactically differentiate data about the DID versus application data - DID applications should be able to represent data in whatever manner they choose, whereas the representations of data in a DID document about the DID itself should be specified by the DID standard. To meet these dual requirements, these two classes of data must be syntactically differentiated. We need to evolve the DID document syntax to syntactically differentiate between data about the DID versus data for DID applications that happens to be co-resident in the DID document. The former is the domain of the DID standard. The latter is not. Note that this differentiation was initially discussed by @dlongley and @selfissued in #67.
priority
syntactically differentiate data about the did versus application data did applications should be able to represent data in whatever manner they choose whereas the representations of data in a did document about the did itself should be specified by the did standard to meet these dual requirements these two classes of data must be syntactically differentiated we need to evolve the did document syntax to syntactically differentiate between data about the did versus data for did applications that happens to be co resident in the did document the former is the domain of the did standard the latter is not note that this differentiation was initially discussed by dlongley and selfissued in
1
455,439
13,126,831,010
IssuesEvent
2020-08-06 09:15:51
The-Codin-Hole/HotWired-Bot
https://api.github.com/repos/The-Codin-Hole/HotWired-Bot
closed
Cant Launch Bot From start.py, 'module' object is not callable
priority: 1 - high type: bug
[`start.py`](https://github.com/The-Codin-Hole/HotWired-Bot/blob/f2be6e60742bcb387e80fe40131fc8d0a5f8216a/start.py) file doesn't start the bot properly and raises an exception: ``` Traceback (most recent call last): File "G:\New Downloads\HotWired-Bot\start.py", line 5, in <module> main() TypeError: 'module' object is not callable ```
1.0
Cant Launch Bot From start.py, 'module' object is not callable - [`start.py`](https://github.com/The-Codin-Hole/HotWired-Bot/blob/f2be6e60742bcb387e80fe40131fc8d0a5f8216a/start.py) file doesn't start the bot properly and raises an exception: ``` Traceback (most recent call last): File "G:\New Downloads\HotWired-Bot\start.py", line 5, in <module> main() TypeError: 'module' object is not callable ```
priority
cant launch bot from start py module object is not callable file doesn t start the bot properly and raises an exception traceback most recent call last file g new downloads hotwired bot start py line in main typeerror module object is not callable
1
142,348
5,474,366,938
IssuesEvent
2017-03-11 00:32:24
RobotLocomotion/drake
https://api.github.com/repos/RobotLocomotion/drake
closed
Maliput village demos
priority: high team: cars type: feature request
The famous "Maliput village demos" are on a feature branch on their way to master. This issue is to track their completion. - First PR with prep work: #5032 - Second PR with actual demos: TBD
1.0
Maliput village demos - The famous "Maliput village demos" are on a feature branch on their way to master. This issue is to track their completion. - First PR with prep work: #5032 - Second PR with actual demos: TBD
priority
maliput village demos the famous maliput village demos are on a feature branch on their way to master this issue is to track their completion first pr with prep work second pr with actual demos tbd
1
465,859
13,393,834,986
IssuesEvent
2020-09-03 05:21:33
onaio/reveal-frontend
https://api.github.com/repos/onaio/reveal-frontend
opened
RVL-1126 - Jurisdiction names in jurisdiction metadata has metadata appended to it
Priority: High Reveal-DSME
- [ ] On the Namibia Production, when a user downloads the csv file from the "Download Jurisdiction Metadata" section, the jurisdiction names come with an appended "metadata" addition See screenshot below. ![image](https://user-images.githubusercontent.com/5908630/92074295-68709800-edbe-11ea-9618-ab8cefdba082.png)
1.0
RVL-1126 - Jurisdiction names in jurisdiction metadata has metadata appended to it - - [ ] On the Namibia Production, when a user downloads the csv file from the "Download Jurisdiction Metadata" section, the jurisdiction names come with an appended "metadata" addition See screenshot below. ![image](https://user-images.githubusercontent.com/5908630/92074295-68709800-edbe-11ea-9618-ab8cefdba082.png)
priority
rvl jurisdiction names in jurisdiction metadata has metadata appended to it on the namibia production when a user downloads the csv file from the download jurisdiction metadata section the jurisdiction names come with an appended metadata addition see screenshot below
1
315,185
9,607,547,818
IssuesEvent
2019-05-11 19:53:16
bounswe/bounswe2019group8
https://api.github.com/repos/bounswe/bounswe2019group8
closed
Research for Assignment 7
Effort: Medium Group work Priority: High Status: In Progress Type: Research
**Actions:** 1. We should make a research on API, git versioning system, RESTful API. One of the resources is the examples shared by professor on Piazza. **Deadline:** 18.04.2019 23.59
1.0
Research for Assignment 7 - **Actions:** 1. We should make a research on API, git versioning system, RESTful API. One of the resources is the examples shared by professor on Piazza. **Deadline:** 18.04.2019 23.59
priority
research for assignment actions we should make a research on api git versioning system restful api one of the resources is the examples shared by professor on piazza deadline
1
166,062
6,290,426,187
IssuesEvent
2017-07-19 21:29:16
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[social] API parameters marked as optional are required
bug Priority: High
Calling `GET api/3/comments/moderation/:status` without parameters `sortBy` & `sortOrder` causes an exception: ``` java.lang.NullPointerException: null at org.craftercms.commons.mongo.AbstractJongoRepository.createSortQuery(AbstractJongoRepository.java:616) ~[crafter-commons-mongo-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] at org.craftercms.social.repositories.ugc.impl.UGCRepositoryImpl.findByModerationStatus(UGCRepositoryImpl.java:309) ~[classes/:3.0.0-SNAPSHOT] at org.craftercms.social.services.social.impl.SocialServicesImpl.findByModerationStatus(SocialServicesImpl.java:166) ~[classes/:3.0.0-SNAPSHOT] ```
1.0
[social] API parameters marked as optional are required - Calling `GET api/3/comments/moderation/:status` without parameters `sortBy` & `sortOrder` causes an exception: ``` java.lang.NullPointerException: null at org.craftercms.commons.mongo.AbstractJongoRepository.createSortQuery(AbstractJongoRepository.java:616) ~[crafter-commons-mongo-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] at org.craftercms.social.repositories.ugc.impl.UGCRepositoryImpl.findByModerationStatus(UGCRepositoryImpl.java:309) ~[classes/:3.0.0-SNAPSHOT] at org.craftercms.social.services.social.impl.SocialServicesImpl.findByModerationStatus(SocialServicesImpl.java:166) ~[classes/:3.0.0-SNAPSHOT] ```
priority
api parameters marked as optional are required calling get api comments moderation status without parameters sortby sortorder causes an exception java lang nullpointerexception null at org craftercms commons mongo abstractjongorepository createsortquery abstractjongorepository java at org craftercms social repositories ugc impl ugcrepositoryimpl findbymoderationstatus ugcrepositoryimpl java at org craftercms social services social impl socialservicesimpl findbymoderationstatus socialservicesimpl java
1
373,302
11,038,889,860
IssuesEvent
2019-12-08 16:51:48
GluuFederation/oxd
https://api.github.com/repos/GluuFederation/oxd
closed
`StateService` keeps state and nonce in-memory which prevents HA of oxd.
High priority enhancement
`StateService` keeps state and nonce in-memory which prevents HA of oxd. Please move saving of nonce and state in that service to persistence.
1.0
`StateService` keeps state and nonce in-memory which prevents HA of oxd. - `StateService` keeps state and nonce in-memory which prevents HA of oxd. Please move saving of nonce and state in that service to persistence.
priority
stateservice keeps state and nonce in memory which prevents ha of oxd stateservice keeps state and nonce in memory which prevents ha of oxd please move saving of nonce and state in that service to persistence
1
146,715
5,626,403,702
IssuesEvent
2017-04-04 21:46:24
Tour-de-Force/btc-app
https://api.github.com/repos/Tour-de-Force/btc-app
closed
Filter icons on map screen?
Priority: High question
It appears that both buttons/icons on the map view are for filtering the service points? What was the intention of the two buttons again? Seems like we only need one for filtering.
1.0
Filter icons on map screen? - It appears that both buttons/icons on the map view are for filtering the service points? What was the intention of the two buttons again? Seems like we only need one for filtering.
priority
filter icons on map screen it appears that both buttons icons on the map view are for filtering the service points what was the intention of the two buttons again seems like we only need one for filtering
1
204,008
7,079,224,195
IssuesEvent
2018-01-10 08:43:10
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
opened
Access control: ambiguous statement in documentation
2.2.0 Priority/High Type/Docs
In [1] it states the following. Please include the API Manager version that is being addressed. `This feature is available and enabled by default in WSO2 API Manager - Update 2` [1] https://docs.wso2.com/display/AM2xx/Enabling+Access+Control+Support+for+API+Publisher
1.0
Access control: ambiguous statement in documentation - In [1] it states the following. Please include the API Manager version that is being addressed. `This feature is available and enabled by default in WSO2 API Manager - Update 2` [1] https://docs.wso2.com/display/AM2xx/Enabling+Access+Control+Support+for+API+Publisher
priority
access control ambiguous statement in documentation in it states the following please include the api manager version that is being addressed this feature is available and enabled by default in api manager update
1
390,328
11,542,101,894
IssuesEvent
2020-02-18 06:26:49
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
opened
[3.1.0-Beta] Resources page (swagger) does not load if the sandbox endpoints are not provided.
Priority/Highest Severity/Critical Type/Bug
### Description: If the api only have production endpoints, the resource page loading fails with an error. ``` [2020-02-18 11:53:05,736] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper. org.json.JSONException: JSONObject["sandbox_endpoints"] is not a JSONObject. at org.json.JSONObject.getJSONObject(JSONObject.java:577) ~[json_3.0.0.wso2v1.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.setPrimaryConfig_aroundBody70(OASParserUtil.java:1076) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.setPrimaryConfig(OASParserUtil.java:1068) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateOASConfigForEndpoints_aroundBody62(OASParserUtil.java:954) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateOASConfigForEndpoints(OASParserUtil.java:935) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OAS3Parser.getOASDefinitionForPublisher_aroundBody36(OAS3Parser.java:653) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OAS3Parser.getOASDefinitionForPublisher(OAS3Parser.java:607) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl.apisApiIdSwaggerGet(ApisApiServiceImpl.java:2799) ~[classes/:?] at org.wso2.carbon.apimgt.rest.api.publisher.v1.ApisApi.apisApiIdSwaggerGet(ApisApi.java:760) ~[classes/:?] at sun.reflect.GeneratedMethodAccessor531.invoke(Unknown Source) ~[?:?] 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_211] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_211] at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:193) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:103) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:216) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:301) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at 
org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:225) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) [tomcat-servlet-api_9.0.22.wso2v1.jar:?] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:276) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat_9.0.22.wso2v1.jar:?] at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:86) [org.wso2.carbon.identity.context.rewrite.valve_1.3.19.jar:?] at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) [org.wso2.carbon.identity.authz.valve_1.3.19.jar:?] 
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:74) [org.wso2.carbon.identity.auth.valve_1.3.19.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678) [tomcat_9.0.22.wso2v1.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:119) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat_9.0.22.wso2v1.jar:?] 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_211] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_211] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat_9.0.22.wso2v1.jar:?] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_211] ``` ### Steps to reproduce: ### Affected Product Version: <!-- Members can use Affected/*** labels --> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
1.0
[3.1.0-Beta] Resources page (swagger) does not load if the sandbox endpoints are not provided. - ### Description: If the api only have production endpoints, the resource page loading fails with an error. ``` [2020-02-18 11:53:05,736] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper. org.json.JSONException: JSONObject["sandbox_endpoints"] is not a JSONObject. at org.json.JSONObject.getJSONObject(JSONObject.java:577) ~[json_3.0.0.wso2v1.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.setPrimaryConfig_aroundBody70(OASParserUtil.java:1076) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.setPrimaryConfig(OASParserUtil.java:1068) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateOASConfigForEndpoints_aroundBody62(OASParserUtil.java:954) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateOASConfigForEndpoints(OASParserUtil.java:935) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OAS3Parser.getOASDefinitionForPublisher_aroundBody36(OAS3Parser.java:653) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.impl.definitions.OAS3Parser.getOASDefinitionForPublisher(OAS3Parser.java:607) ~[org.wso2.carbon.apimgt.impl_6.6.66.jar:?] at org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl.apisApiIdSwaggerGet(ApisApiServiceImpl.java:2799) ~[classes/:?] at org.wso2.carbon.apimgt.rest.api.publisher.v1.ApisApi.apisApiIdSwaggerGet(ApisApi.java:760) ~[classes/:?] at sun.reflect.GeneratedMethodAccessor531.invoke(Unknown Source) ~[?:?] 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_211] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_211] at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:193) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:103) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:216) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:301) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at 
org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:225) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) [tomcat-servlet-api_9.0.22.wso2v1.jar:?] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:276) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat_9.0.22.wso2v1.jar:?] at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:86) [org.wso2.carbon.identity.context.rewrite.valve_1.3.19.jar:?] at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) [org.wso2.carbon.identity.authz.valve_1.3.19.jar:?] 
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:74) [org.wso2.carbon.identity.auth.valve_1.3.19.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678) [tomcat_9.0.22.wso2v1.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:119) [org.wso2.carbon.tomcat.ext_4.6.0.beta.jar:?] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587) [tomcat_9.0.22.wso2v1.jar:?] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat_9.0.22.wso2v1.jar:?] 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_211] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_211] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat_9.0.22.wso2v1.jar:?] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_211] ``` ### Steps to reproduce: ### Affected Product Version: <!-- Members can use Affected/*** labels --> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
priority
resources page swagger does not load if the sandbox endpoints are not provided description if the api only have production endpoints the resource page loading fails with an error error globalthrowablemapper an unknown exception has been captured by the global exception mapper org json jsonexception jsonobject is not a jsonobject at org json jsonobject getjsonobject jsonobject java at org carbon apimgt impl definitions oasparserutil setprimaryconfig oasparserutil java at org carbon apimgt impl definitions oasparserutil setprimaryconfig oasparserutil java at org carbon apimgt impl definitions oasparserutil generateoasconfigforendpoints oasparserutil java at org carbon apimgt impl definitions oasparserutil generateoasconfigforendpoints oasparserutil java at org carbon apimgt impl definitions getoasdefinitionforpublisher java at org carbon apimgt impl definitions getoasdefinitionforpublisher java at org carbon apimgt rest api publisher impl apisapiserviceimpl apisapiidswaggerget apisapiserviceimpl java at org carbon apimgt rest api publisher apisapi apisapiidswaggerget apisapi java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java at org apache cxf service invoker abstractinvoker invoke abstractinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf interceptor serviceinvokerinterceptor run serviceinvokerinterceptor java at org apache cxf interceptor serviceinvokerinterceptor handlemessage serviceinvokerinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke 
abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet doget abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves 
tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java steps to reproduce affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
1
616,482
19,303,629,317
IssuesEvent
2021-12-13 09:12:55
ita-social-projects/TeachUA
https://api.github.com/repos/ita-social-projects/TeachUA
opened
Create Challenge/Task controller
Backend Priority: High Task API
Develop Controller, service methods to fill tables of DB. Develop classes Taskinfo, TaskinfoRepository, Read from file as JSON get filename from URL parameters
1.0
Create Challenge/Task controller - Develop Controller, service methods to fill tables of DB. Develop classes Taskinfo, TaskinfoRepository, Read from file as JSON get filename from URL parameters
priority
create challenge task controller develop controller service methods to fill tables of db develop classes taskinfo taskinforepository read from file as json get filename from url parameters
1
378,726
11,206,813,407
IssuesEvent
2020-01-06 00:11:31
ChainSafe/gossamer
https://api.github.com/repos/ChainSafe/gossamer
closed
Change types.BlockHeader to unexport hash field and add getter/setter
Priority: 2 - High scale
currently, types.Block uses types.BlockHeaderWithHash. it should be using types.BlockHeader so that scale decoding from BABE works, however, the block tree requires a hash field. we should change types.BlockHeader to have an unexported hash field and add a Hash() function on it that sets the hash if nil, otherwise it gets the hash.
1.0
Change types.BlockHeader to unexport hash field and add getter/setter - currently, types.Block uses types.BlockHeaderWithHash. it should be using types.BlockHeader so that scale decoding from BABE works, however, the block tree requires a hash field. we should change types.BlockHeader to have an unexported hash field and add a Hash() function on it that sets the hash if nil, otherwise it gets the hash.
priority
change types blockheader to unexport hash field and add getter setter currently types block uses types blockheaderwithhash it should be using types blockheader so that scale decoding from babe works however the block tree requires a hash field we should change types blockheader to have an unexported hash field and add a hash function on it that sets the hash if nil otherwise it gets the hash
1
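The design asked for in the record above — an unexported hash field plus a `Hash()` accessor that "sets the hash if nil, otherwise gets the hash" — can be sketched outside Go. The project itself is Go; the field names and the BLAKE2b hashing scheme below are illustrative assumptions, not gossamer's actual encoding:

```python
import hashlib

class BlockHeader:
    """Sketch of a header with a private, lazily computed hash (hypothetical fields)."""

    def __init__(self, parent_hash: bytes, number: int):
        self.parent_hash = parent_hash
        self.number = number
        self._hash = None  # "unexported": not part of wire decoding, filled on demand

    def hash(self) -> bytes:
        # Compute and cache the hash on first access; return the cached value after.
        if self._hash is None:
            payload = self.parent_hash + self.number.to_bytes(8, "big")
            self._hash = hashlib.blake2b(payload, digest_size=32).digest()
        return self._hash
```

Keeping the field private and filling it on demand lets SCALE decoding construct headers without a hash, while the block tree can still ask for one whenever it needs it.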
650,512
21,394,241,451
IssuesEvent
2022-04-21 10:01:16
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
Should not allow to update the multi attribute configs when there is white spaces among the claim list
Priority/High Severity/Major bug 5.12.0-bug-fixing Affected-5.12.0 QA-Reported Mutli-Attribute-Login
**How to reproduce:** 1. Enable mutiattribute login for claims mobile and telephone 2. In the claim list entering field eneter the list without commas and having whitespaces among the list 3. It will allow to succesfully update the configs with managemnet console and console 4. Access myaccount 5. Try to login with configured login identifier (mobile) 6. **Actual behavior:** It wont allow to login to myaccount **Expected Behavior** There should be proper limitations in the UI when user trying to update the claim list by having spaces in between the claim list. Should not allow the user to save the list if its not according to expected format. **Environment information** (_Please complete the following information; remove any unnecessary fields_) **:** IS 5.12.0 alpha 16 h2/default ![1](https://user-images.githubusercontent.com/31848014/159214213-5d46ee1f-c389-4771-9448-787b5bbf9088.png) ![2](https://user-images.githubusercontent.com/31848014/159214217-ad6505f2-edce-4b8b-81f4-8f3e662ca04c.png) https://user-images.githubusercontent.com/31848014/159214255-a8f0030c-4ad7-46b2-92b0-e910307b03e6.mp4
1.0
Should not allow to update the multi attribute configs when there is white spaces among the claim list - **How to reproduce:** 1. Enable mutiattribute login for claims mobile and telephone 2. In the claim list entering field eneter the list without commas and having whitespaces among the list 3. It will allow to succesfully update the configs with managemnet console and console 4. Access myaccount 5. Try to login with configured login identifier (mobile) 6. **Actual behavior:** It wont allow to login to myaccount **Expected Behavior** There should be proper limitations in the UI when user trying to update the claim list by having spaces in between the claim list. Should not allow the user to save the list if its not according to expected format. **Environment information** (_Please complete the following information; remove any unnecessary fields_) **:** IS 5.12.0 alpha 16 h2/default ![1](https://user-images.githubusercontent.com/31848014/159214213-5d46ee1f-c389-4771-9448-787b5bbf9088.png) ![2](https://user-images.githubusercontent.com/31848014/159214217-ad6505f2-edce-4b8b-81f4-8f3e662ca04c.png) https://user-images.githubusercontent.com/31848014/159214255-a8f0030c-4ad7-46b2-92b0-e910307b03e6.mp4
priority
should not allow to update the multi attribute configs when there is white spaces among the claim list how to reproduce enable mutiattribute login for claims mobile and telephone in the claim list entering field eneter the list without commas and having whitespaces among the list it will allow to succesfully update the configs with managemnet console and console access myaccount try to login with configured login identifier mobile actual behavior it wont allow to login to myaccount expected behavior there should be proper limitations in the ui when user trying to update the claim list by having spaces in between the claim list should not allow the user to save the list if its not according to expected format environment information please complete the following information remove any unnecessary fields is alpha default
1
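The fix the reporter asks for in the record above — reject a claim list whose entries are separated by whitespace instead of commas before saving — amounts to a format check. The accepted format assumed here (comma-separated entries, no bare spaces inside an entry) is an illustration, not the product's actual validation rule:

```python
def is_valid_claim_list(value: str) -> bool:
    """Accept 'a,b,c' style lists; reject empty entries or space-separated tokens."""
    entries = value.split(",")
    return all(entry.strip() and " " not in entry.strip() for entry in entries)
```

A UI would call this before enabling the save button, so a space-separated list never reaches the server in the first place.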
747,663
26,094,916,096
IssuesEvent
2022-12-26 17:42:43
bounswe/bounswe2022group1
https://api.github.com/repos/bounswe/bounswe2022group1
closed
Profile Page Update
Type: Enhancement Priority: High Status: In Progress Backend
**Issue Description:** @kadirgokhann in the front-end team requested an update feature for the profile page to update the profile picture of the user. As the backend team we should implement a patch method to update Profile fields. **Tasks to Do:** - [x] implement a patch method for profile page *Task Deadline:* 26.12.2022- 18.00 *Final Situation:* I implemented a patch method for profileApiView in backend/app/views.py. Patch method takes two optional parameters "about_me" and "image". When any of the parameters are given, the relevant field is updated.
1.0
Profile Page Update - **Issue Description:** @kadirgokhann in the front-end team requested an update feature for the profile page to update the profile picture of the user. As the backend team we should implement a patch method to update Profile fields. **Tasks to Do:** - [x] implement a patch method for profile page *Task Deadline:* 26.12.2022- 18.00 *Final Situation:* I implemented a patch method for profileApiView in backend/app/views.py. Patch method takes two optional parameters "about_me" and "image". When any of the parameters are given, the relevant field is updated.
priority
profile page update issue description kadirgokhann in the front end team requested an update feature for the profile page to update the profile picture of the user as the backend team we should implement a patch method to update profile fields tasks to do implement a patch method for profile page task deadline final situation i implemented a patch method for profileapiview in backend app views py patch method takes two optional parameters about me and image when any of the parameters are given the relevant field is updated
1
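The patch behaviour described in the record's final note — two optional parameters, with a field updated only when its parameter is given — can be sketched framework-free. This is not the project's actual Django view, just the partial-update rule it describes:

```python
def patch_profile(profile, about_me=None, image=None):
    """Partial update: a field changes only when its parameter is supplied (sketch)."""
    if about_me is not None:
        profile["about_me"] = about_me
    if image is not None:
        profile["image"] = image
    return profile
```

Omitted parameters leave the stored values untouched, which is what distinguishes PATCH from a full PUT replacement.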
400,886
11,782,250,553
IssuesEvent
2020-03-17 01:14:20
geopm/geopm
https://api.github.com/repos/geopm/geopm
closed
Cryptic error messages with debug builds of geopmread
bug bug-exposure-high bug-priority-low bug-quality-low
When running ```geopmread``` with a debug build, using the wrong domain or the wrong domain index for a given signal results in a very cryptic error message. The following cases result in the same error: 1. Using an index that is out-of-bounds for a given domain results in the MPIComm error. ```geopmread POWER_DRAM board_memory 2``` 2. Using a domain that is too granular for the given signal results in the MPIComm error. ```geopmread POWER_DRAM core 0``` ``` terminate called after throwing an instance of 'geopm::Exception' what(): <geopm> Invalid argument: PluginFactory::make_plugin(): name: "MPIComm" has not been previously registered: at geopm/src/PluginFactory.hpp:85 Aborted ``` With release builds, you see the proper error message: ``` $ geopmread POWER_DRAM core 0 Error: cannot read signal: <geopm> Invalid argument: PlatformIOImp::read_signal(): domain 2 is not valid for signal "ENERGY_DRAM": at geopm/src/PlatformIO.cpp:575 $ geopmread POWER_DRAM board_memory 2 Error: cannot read signal: <geopm> Invalid argument: PlatformIOImp::read_signal(): domain_idx is out of range: at geopm/src/PlatformIO.cpp:516 ```
1.0
Cryptic error messages with debug builds of geopmread - When running ```geopmread``` with a debug build, using the wrong domain or the wrong domain index for a given signal results in a very cryptic error message. The following cases result in the same error: 1. Using an index that is out-of-bounds for a given domain results in the MPIComm error. ```geopmread POWER_DRAM board_memory 2``` 2. Using a domain that is too granular for the given signal results in the MPIComm error. ```geopmread POWER_DRAM core 0``` ``` terminate called after throwing an instance of 'geopm::Exception' what(): <geopm> Invalid argument: PluginFactory::make_plugin(): name: "MPIComm" has not been previously registered: at geopm/src/PluginFactory.hpp:85 Aborted ``` With release builds, you see the proper error message: ``` $ geopmread POWER_DRAM core 0 Error: cannot read signal: <geopm> Invalid argument: PlatformIOImp::read_signal(): domain 2 is not valid for signal "ENERGY_DRAM": at geopm/src/PlatformIO.cpp:575 $ geopmread POWER_DRAM board_memory 2 Error: cannot read signal: <geopm> Invalid argument: PlatformIOImp::read_signal(): domain_idx is out of range: at geopm/src/PlatformIO.cpp:516 ```
priority
cryptic error messages with debug builds of geopmread when running geopmread with a debug build using the wrong domain or the wrong domain index for a given signal results in a very cryptic error message the following cases result in the same error using an index that is out of bounds for a given domain results in the mpicomm error geopmread power dram board memory using a domain that is too granular for the given signal results in the mpicomm error geopmread power dram core terminate called after throwing an instance of geopm exception what invalid argument pluginfactory make plugin name mpicomm has not been previously registered at geopm src pluginfactory hpp aborted with release builds you see the proper error message geopmread power dram core error cannot read signal invalid argument platformioimp read signal domain is not valid for signal energy dram at geopm src platformio cpp geopmread power dram board memory error cannot read signal invalid argument platformioimp read signal domain idx is out of range at geopm src platformio cpp
1
522,946
15,170,082,407
IssuesEvent
2021-02-12 22:26:03
cds-snc/covid-alert-server
https://api.github.com/repos/cds-snc/covid-alert-server
closed
Implement Microservice for collecting App Metrics
high priority
## Description We need a service that will collect metrics from the Covid Alert App this service will need to aggregate the Metrics for a specific period (TBD, but lets start by bucketing per day). These aggregate metrics will be stored in a new table in the existing Aurora MySQL Database. ## AC - An endpoint is created that allows for the submission of metrics from the App - Calls to this endpoint are only allowed by the app - The metrics are aggregated per day and are stored in a table in the existing Aurora MySQL Database - The new services are covered by automated tests - API Documentation has been updated - Infrastructure as Code is written for new Service - CI is updated for new service - Linting of new source code (see #418) - Snyk scanning of new dependencies - New Tests are running on PR - CNAME record pointing to metric lambda (https://github.com/cds-snc/covid-alert-server-staging-terraform/issues/81)
1.0
Implement Microservice for collecting App Metrics - ## Description We need a service that will collect metrics from the Covid Alert App this service will need to aggregate the Metrics for a specific period (TBD, but lets start by bucketing per day). These aggregate metrics will be stored in a new table in the existing Aurora MySQL Database. ## AC - An endpoint is created that allows for the submission of metrics from the App - Calls to this endpoint are only allowed by the app - The metrics are aggregated per day and are stored in a table in the existing Aurora MySQL Database - The new services are covered by automated tests - API Documentation has been updated - Infrastructure as Code is written for new Service - CI is updated for new service - Linting of new source code (see #418) - Snyk scanning of new dependencies - New Tests are running on PR - CNAME record pointing to metric lambda (https://github.com/cds-snc/covid-alert-server-staging-terraform/issues/81)
priority
implement microservice for collecting app metrics description we need a service that will collect metrics from the covid alert app this service will need to aggregate the metrics for a specific period tbd but lets start by bucketing per day these aggregate metrics will be stored in a new table in the existing aurora mysql database ac an endpoint is created that allows for the submission of metrics from the app calls to this endpoint are only allowed by the app the metrics are aggregated per day and are stored in a table in the existing aurora mysql database the new services are covered by automated tests api documentation has been updated infrastructure as code is written for new service ci is updated for new service linting of new source code see snyk scanning of new dependencies new tests are running on pr cname record pointing to metric lambda
1
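The per-day bucketing in the record's acceptance criteria can be sketched as a plain aggregation step. The event shape assumed below (a metric name plus a UNIX timestamp) is an illustration, not the service's real payload, and storage to the Aurora MySQL table is out of scope here:

```python
from collections import Counter
from datetime import datetime, timezone

def aggregate_daily(events):
    """Count metric events per (name, UTC day) bucket (sketch)."""
    counts = Counter()
    for name, unix_ts in events:
        day = datetime.fromtimestamp(unix_ts, tz=timezone.utc).date().isoformat()
        counts[(name, day)] += 1
    return dict(counts)
```

Each `(name, day)` row of the result maps directly onto one row of the aggregate table the AC calls for.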
165,731
6,284,612,628
IssuesEvent
2017-07-19 08:14:54
wordpress-mobile/AztecEditor-Android
https://api.github.com/repos/wordpress-mobile/AztecEditor-Android
closed
Crash report 7.8: NullPointerException in DisplayUtils.dpToPx
bug high priority
Fatal Exception: java.lang.NullPointerException at org.wordpress.android.util.DisplayUtils.dpToPx(DisplayUtils.java:49) at org.wordpress.android.editor.AztecEditorFragment$8.onResponse(AztecEditorFragment.java:604) at com.android.volley.toolbox.ImageLoader$4.run(ImageLoader.java:474) at android.os.Handler.handleCallback(Handler.java:733) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:136) at android.app.ActivityThread.main(ActivityThread.java:5584) at java.lang.reflect.Method.invokeNative(Method.java) at java.lang.reflect.Method.invoke(Method.java:515) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1268) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1084) at dalvik.system.NativeStart.main(NativeStart.java)
1.0
Crash report 7.8: NullPointerException in DisplayUtils.dpToPx - Fatal Exception: java.lang.NullPointerException at org.wordpress.android.util.DisplayUtils.dpToPx(DisplayUtils.java:49) at org.wordpress.android.editor.AztecEditorFragment$8.onResponse(AztecEditorFragment.java:604) at com.android.volley.toolbox.ImageLoader$4.run(ImageLoader.java:474) at android.os.Handler.handleCallback(Handler.java:733) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:136) at android.app.ActivityThread.main(ActivityThread.java:5584) at java.lang.reflect.Method.invokeNative(Method.java) at java.lang.reflect.Method.invoke(Method.java:515) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1268) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1084) at dalvik.system.NativeStart.main(NativeStart.java)
priority
crash report nullpointerexception in displayutils dptopx fatal exception java lang nullpointerexception at org wordpress android util displayutils dptopx displayutils java at org wordpress android editor azteceditorfragment onresponse azteceditorfragment java at com android volley toolbox imageloader run imageloader java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invokenative method java at java lang reflect method invoke method java at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java at dalvik system nativestart main nativestart java
1
739,515
25,600,557,003
IssuesEvent
2022-12-01 19:48:23
bounswe/bounswe2022group9
https://api.github.com/repos/bounswe/bounswe2022group9
closed
[Backend] Review exhibition feature pull request
Enhancement Priority: High Backend
Deadline: 02.11.2022 23.59 TODO: - [x] The pull request for exhibition should be reviewed. - [x] The pull request should be merged.
1.0
[Backend] Review exhibition feature pull request - Deadline: 02.11.2022 23.59 TODO: - [x] The pull request for exhibition should be reviewed. - [x] The pull request should be merged.
priority
review exhibition feature pull request deadline todo the pull request for exhibition should be reviewed the pull request should be merged
1
495,145
14,272,663,678
IssuesEvent
2020-11-21 18:00:52
bounswe/bounswe2020group5
https://api.github.com/repos/bounswe/bounswe2020group5
closed
backend/ Implementation of user models
Priority: High Status: In Progress assignment
The custom user models should be implemented. Deadline: 20.11.2020 @19.00
1.0
backend/ Implementation of user models - The custom user models should be implemented. Deadline: 20.11.2020 @19.00
priority
backend implementation of user models the custom user models should be implemented deadline
1
145,504
5,576,932,287
IssuesEvent
2017-03-28 08:21:31
HPI-SWA-Lab/BP2016H1
https://api.github.com/repos/HPI-SWA-Lab/BP2016H1
closed
Fontprojekt
priority high question
- directly in Squeak - with data structures - with info such as "42 of 666 glyphs done" - perhaps tool interaction can build on this? - also store the texts
1.0
Fontprojekt - - directly in Squeak - with data structures - with info such as "42 of 666 glyphs done" - perhaps tool interaction can build on this? - also store the texts
priority
fontprojekt directly in squeak with data structures with info such as of glyphs done perhaps tool interaction can build on this store the texts
1
556,405
16,483,622,215
IssuesEvent
2021-05-24 14:55:10
AbsaOSS/enceladus
https://api.github.com/repos/AbsaOSS/enceladus
opened
run_enceladus.sh ingores num of executors, executor memory and cores
bug priority: high run scripts
## Describe the bug The setting of the initial `CMD_LINE` is too low and needs to be moved from line [458](https://github.com/AbsaOSS/enceladus/blob/develop/scripts/bash/run_enceladus.sh#L458) to line 401. Thank you @hamc17 for finding it!
1.0
run_enceladus.sh ingores num of executors, executor memory and cores - ## Describe the bug The setting of the initial `CMD_LINE` is too low and needs to be moved from line [458](https://github.com/AbsaOSS/enceladus/blob/develop/scripts/bash/run_enceladus.sh#L458) to line 401. Thank you @hamc17 for finding it!
priority
run enceladus sh ingores num of executors executor memory and cores describe the bug the setting of the initial cmd line is too low and needs to be moved from line to line thank you for finding it
1
701,776
24,107,978,442
IssuesEvent
2022-09-20 09:01:55
VCityTeam/UD-Viz
https://api.github.com/repos/VCityTeam/UD-Viz
closed
New version of UDVIZ and dynamic layer not working
bug priority-high
We are working with @valentinMachado and @mathieuLivebardon to update the ReAgent (Gratte Ciel) demo to make it compatible with the last version of UD_VIZ (2.38) However after spending time with @valentinMachado we didn't manage to make it work (the install and the build are ok) but not the run time where the bounding box is not displayed as it is suppose to and the dynamic layer either This is what I have in the old working version (ud_viz 2.37). Boudning box + json layer <img width="1341" alt="Screenshot 2022-09-19 at 16 15 25" src="https://user-images.githubusercontent.com/3928502/191038728-d32dfa77-3822-4855-b5de-39b5856c0052.png"> and this is what I have with the ud_viz 2.38 (the 3D tile is now displayed by default) but we don't see (actually I see it very quickly the blue bounding box and the json layer) <img width="1317" alt="Screenshot 2022-09-19 at 16 10 48" src="https://user-images.githubusercontent.com/3928502/191038144-72759fc4-0584-40b5-b345-6b3d66c75b65.png"> The main difference is the usage of ```app.view3D.getItownsView()``` instead of ```app.view``` As agreed with @valentinMachado the current master branch of reagent is working well with UD_VIZ and can be found here https://github.com/VCityTeam/UD_ReAgent_ABM/tree/master and the dev branch is this one https://github.com/VCityTeam/UD_ReAgent_ABM/tree/dev_ud_viz_2_38
1.0
New version of UDVIZ and dynamic layer not working - We are working with @valentinMachado and @mathieuLivebardon to update the ReAgent (Gratte Ciel) demo to make it compatible with the last version of UD_VIZ (2.38) However after spending time with @valentinMachado we didn't manage to make it work (the install and the build are ok) but not the run time where the bounding box is not displayed as it is suppose to and the dynamic layer either This is what I have in the old working version (ud_viz 2.37). Boudning box + json layer <img width="1341" alt="Screenshot 2022-09-19 at 16 15 25" src="https://user-images.githubusercontent.com/3928502/191038728-d32dfa77-3822-4855-b5de-39b5856c0052.png"> and this is what I have with the ud_viz 2.38 (the 3D tile is now displayed by default) but we don't see (actually I see it very quickly the blue bounding box and the json layer) <img width="1317" alt="Screenshot 2022-09-19 at 16 10 48" src="https://user-images.githubusercontent.com/3928502/191038144-72759fc4-0584-40b5-b345-6b3d66c75b65.png"> The main difference is the usage of ```app.view3D.getItownsView()``` instead of ```app.view``` As agreed with @valentinMachado the current master branch of reagent is working well with UD_VIZ and can be found here https://github.com/VCityTeam/UD_ReAgent_ABM/tree/master and the dev branch is this one https://github.com/VCityTeam/UD_ReAgent_ABM/tree/dev_ud_viz_2_38
priority
new version of udviz and dynamic layer not working we are working with valentinmachado and mathieulivebardon to update the reagent gratte ciel demo to make it compatible with the last version of ud viz however after spending time with valentinmachado we didn t manage to make it work the install and the build are ok but not the run time where the bounding box is not displayed as it is suppose to and the dynamic layer either this is what i have in the old working version ud viz boudning box json layer img width alt screenshot at src and this is what i have with the ud viz the tile is now displayed by default but we don t see actually i see it very quickly the blue bounding box and the json layer img width alt screenshot at src the main difference is the usage of app getitownsview instead of app view as agreed with valentinmachado the current master branch of reagent is working well with ud viz and can be found here and the dev branch is this one
1
301,614
9,222,463,681
IssuesEvent
2019-03-11 23:00:14
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Single player should start in 32 bit
High Priority
This got disabled for today's hotfix, but we need to revive it for the next fix as it give 40% mem savings. Since migration is the problem, we can skip it for this case and force load the 64 bit version. So when starting a server: - If the world is from a prior version, always load 64 bit - If the world is from current version or later, dynamically choose 32 or 64 bit based on the save - If the world is brand new, start 32 bit if the size can handle it (and since theres no size option in the client-started worlds, always do 32 bit there)
1.0
Single player should start in 32 bit - This got disabled for today's hotfix, but we need to revive it for the next fix as it give 40% mem savings. Since migration is the problem, we can skip it for this case and force load the 64 bit version. So when starting a server: - If the world is from a prior version, always load 64 bit - If the world is from current version or later, dynamically choose 32 or 64 bit based on the save - If the world is brand new, start 32 bit if the size can handle it (and since theres no size option in the client-started worlds, always do 32 bit there)
priority
single player should start in bit this got disabled for today s hotfix but we need to revive it for the next fix as it give mem savings since migration is the problem we can skip it for this case and force load the bit version so when starting a server if the world is from a prior version always load bit if the world is from current version or later dynamically choose or bit based on the save if the world is brand new start bit if the size can handle it and since theres no size option in the client started worlds always do bit there
1
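The three-way loader rule in the record above reduces to a small decision function. The parameter names below are illustrative, not the game's actual API:

```python
def choose_bitness(world_age, save_prefers_64=False, size_fits_32=True):
    """Pick 32- or 64-bit per the rules above (world_age: 'prior'|'current'|'new')."""
    if world_age == "prior":    # older save: skip migration, always load 64-bit
        return 64
    if world_age == "current":  # current-or-later save: choose dynamically from the save
        return 64 if save_prefers_64 else 32
    # brand-new world: 32-bit if the size can handle it
    return 32 if size_fits_32 else 64
```

Client-started worlds have no size option, so they fall through the "new world" branch with the default and always get 32-bit.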
477,034
13,754,392,810
IssuesEvent
2020-10-06 16:52:49
ROCmSoftwarePlatform/MIOpen
https://api.github.com/repos/ROCmSoftwarePlatform/MIOpen
closed
No writing to user find-db when fast-hybrid mode is enabled.
enhancement priority_high request_for_comments
Extension of #299 (implemented in PR #326). @atamazov Can you or @DrizztDoUrden take this task?
1.0
No writing to user find-db when fast-hybrid mode is enabled. - Extension of #299 (implemented in PR #326). @atamazov Can you or @DrizztDoUrden take this task?
priority
no writing to user find db when fast hybrid mode is enabled extension of implemented in pr atamazov can you or drizztdourden take this task
1
687,533
23,530,670,658
IssuesEvent
2022-08-19 15:01:39
cryostatio/cryostat
https://api.github.com/repos/cryostatio/cryostat
closed
[Task] Pluggable Discovery In-Place Model Updates
high-priority feat dependent
Depends on https://github.com/cryostatio/cryostat/issues/937 Parent: https://github.com/cryostatio/cryostat/issues/936 To serve parent 4 and 6, once we have a database of environment and target nodes, we should use the platform capabilities to listen for changes to these nodes and use those notifications to dynamically update the database contents. For example, in OpenShift, if one of our Endpoints objects disappears, then it should be removed from the database. The parent Pod that contained the Endpoints should be checked, and if it has no other child Endpoints, the Pod should also be removed. Then the Pod's parent Deployment(Config) should be checked and removed if it has no other Pod children.
1.0
[Task] Pluggable Discovery In-Place Model Updates - Depends on https://github.com/cryostatio/cryostat/issues/937 Parent: https://github.com/cryostatio/cryostat/issues/936 To serve parent 4 and 6, once we have a database of environment and target nodes, we should use the platform capabilities to listen for changes to these nodes and use those notifications to dynamically update the database contents. For example, in OpenShift, if one of our Endpoints objects disappears, then it should be removed from the database. The parent Pod that contained the Endpoints should be checked, and if it has no other child Endpoints, the Pod should also be removed. Then the Pod's parent Deployment(Config) should be checked and removed if it has no other Pod children.
priority
pluggable discovery in place model updates depends on parent to serve parent and once we have a database of environment and target nodes we should use the platform capabilities to listen for changes to these nodes and use those notifications to dynamically update the database contents for example in openshift if one of our endpoints objects disappears then it should be removed from the database the parent pod that contained the endpoints should be checked and if it has no other child endpoints the pod should also be removed then the pod s parent deployment config should be checked and removed if it has no other pod children
1
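The cascade described in the record above — remove a vanished leaf, then prune each ancestor left with no children — is a generic tree operation. The node model below is an assumption for illustration, not Cryostat's actual discovery schema:

```python
class Node:
    """Minimal tree node: Deployment -> Pod -> Endpoints, by way of example."""

    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

def prune(node):
    """Detach `node` from its parent, then prune ancestors that become childless."""
    parent = node.parent
    if parent is None:
        return  # reached the root; nothing above to check
    parent.children.remove(node)
    if not parent.children:
        prune(parent)
```

Called when a platform notification reports an Endpoints object gone, this removes the Endpoints node and walks upward exactly as the record describes: the Pod goes if it has no other Endpoints, then the Deployment is checked in turn.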
601,244
18,393,203,237
IssuesEvent
2021-10-12 08:27:53
w3ctag/design-reviews
https://api.github.com/repos/w3ctag/design-reviews
closed
Media Source Extensions for WebCodecs
Topic: media Progress: review complete Resolution: satisfied Review type: early review Venue: Media WG Progress: propose closing chromium-high-priority
HIQaH! QaH! TAG! I'm requesting a TAG review of Media Source Extensions for WebCodecs. The [Media Source Extensions API (MSE)](https://www.w3.org/TR/media-source/) requires applications to provide fragments of containerized media (such as fragmented MP4, WebM, or MP3) to be able to buffer and play that media. If the application already has the media in a parsed and structured form, it can cost extra latency and code complexity to package that media into a container and feed it to MSE. As the web platform is evolving to give applications lower-level abstractions for efficiently encoding and decoding media via the [WebCodecs API](https://github.com/WICG/web-codecs), MSE can use WebCodec's media structures to let applications more efficiently feed their media to MSE. We propose additional MSE methods to provide web authors easy ability to buffer "containerless" media as an alternative to existing MSE methods that required containerized, "muxed" media. - Explainer (minimally containing user needs and example code): https://github.com/wolenetz/mse-for-webcodecs/blob/main/explainer.md - Security and Privacy self-review: https://github.com/wolenetz/mse-for-webcodecs/blob/main/security-privacy-questionnaire.md - Primary contacts (and their relationship to the specification): - Matt Wolenetz (@wolenetz, Google) - MSE spec co-editor and editor of this new feature. - Organization/project driving the design: Google Chrome - External status/issue trackers for this feature (publicly visible, e.g. 
Chrome Status): - Chrome Status: https://www.chromestatus.com/features/5649291471224832 - MSE specification issue: https://github.com/w3c/media-source/issues/184#issuecomment-720771445 - Chromium feature bug: https://bugs.chromium.org/p/chromium/issues/detail?id=1144908 Further details: - [x] I have reviewed the TAG's [API Design Principles](https://w3ctag.github.io/design-principles/) - The group where the incubation/design work on this is being done (or is intended to be done in the future): [Media WG](https://www.w3.org/media-wg/) - The group where standardization of this work is intended to be done ("unknown" if not known): [Media WG](https://www.w3.org/media-wg/) - Major unresolved issues with or opposition to this design: None. Minor open questions are listed in the [Explainer's Open Questions section](https://github.com/wolenetz/mse-for-webcodecs/blob/main/explainer.md#open-questions--notes--links) You should also know that... - The review is early in the prototyping and design stage because we expect to learn from early implementation feedback more precise answers to the [Explainer's Open Questions section](https://github.com/wolenetz/mse-for-webcodecs/blob/main/explainer.md#open-questions--notes--links), so we'd like to get a rough design reviewed to help refine eventual implementation and specification for this feature. We'd prefer the TAG provide feedback as: 💬 leave review feedback as a **comment in this issue** and @-notify @wolenetz
1.0
Media Source Extensions for WebCodecs - HIQaH! QaH! TAG! I'm requesting a TAG review of Media Source Extensions for WebCodecs. The [Media Source Extensions API (MSE)](https://www.w3.org/TR/media-source/) requires applications to provide fragments of containerized media (such as fragmented MP4, WebM, or MP3) to be able to buffer and play that media. If the application already has the media in a parsed and structured form, it can cost extra latency and code complexity to package that media into a container and feed it to MSE. As the web platform is evolving to give applications lower-level abstractions for efficiently encoding and decoding media via the [WebCodecs API](https://github.com/WICG/web-codecs), MSE can use WebCodec's media structures to let applications more efficiently feed their media to MSE. We propose additional MSE methods to provide web authors easy ability to buffer "containerless" media as an alternative to existing MSE methods that required containerized, "muxed" media. - Explainer (minimally containing user needs and example code): https://github.com/wolenetz/mse-for-webcodecs/blob/main/explainer.md - Security and Privacy self-review: https://github.com/wolenetz/mse-for-webcodecs/blob/main/security-privacy-questionnaire.md - Primary contacts (and their relationship to the specification): - Matt Wolenetz (@wolenetz, Google) - MSE spec co-editor and editor of this new feature. - Organization/project driving the design: Google Chrome - External status/issue trackers for this feature (publicly visible, e.g. 
Chrome Status): - Chrome Status: https://www.chromestatus.com/features/5649291471224832 - MSE specification issue: https://github.com/w3c/media-source/issues/184#issuecomment-720771445 - Chromium feature bug: https://bugs.chromium.org/p/chromium/issues/detail?id=1144908 Further details: - [x] I have reviewed the TAG's [API Design Principles](https://w3ctag.github.io/design-principles/) - The group where the incubation/design work on this is being done (or is intended to be done in the future): [Media WG](https://www.w3.org/media-wg/) - The group where standardization of this work is intended to be done ("unknown" if not known): [Media WG](https://www.w3.org/media-wg/) - Major unresolved issues with or opposition to this design: None. Minor open questions are listed in the [Explainer's Open Questions section](https://github.com/wolenetz/mse-for-webcodecs/blob/main/explainer.md#open-questions--notes--links) You should also know that... - The review is early in the prototyping and design stage because we expect to learn from early implementation feedback more precise answers to the [Explainer's Open Questions section](https://github.com/wolenetz/mse-for-webcodecs/blob/main/explainer.md#open-questions--notes--links), so we'd like to get a rough design reviewed to help refine eventual implementation and specification for this feature. We'd prefer the TAG provide feedback as: 💬 leave review feedback as a **comment in this issue** and @-notify @wolenetz
priority
media source extensions for webcodecs hiqah qah tag i m requesting a tag review of media source extensions for webcodecs the requires applications to provide fragments of containerized media such as fragmented webm or to be able to buffer and play that media if the application already has the media in a parsed and structured form it can cost extra latency and code complexity to package that media into a container and feed it to mse as the web platform is evolving to give applications lower level abstractions for efficiently encoding and decoding media via the mse can use webcodec s media structures to let applications more efficiently feed their media to mse we propose additional mse methods to provide web authors easy ability to buffer containerless media as an alternative to existing mse methods that required containerized muxed media explainer minimally containing user needs and example code security and privacy self review primary contacts and their relationship to the specification matt wolenetz wolenetz google mse spec co editor and editor of this new feature organization project driving the design google chrome external status issue trackers for this feature publicly visible e g chrome status chrome status mse specification issue chromium feature bug further details i have reviewed the tag s the group where the incubation design work on this is being done or is intended to be done in the future the group where standardization of this work is intended to be done unknown if not known major unresolved issues with or opposition to this design none minor open questions are listed in the you should also know that the review is early in the prototyping and design stage because we expect to learn from early implementation feedback more precise answers to the so we d like to get a rough design reviewed to help refine eventual implementation and specification for this feature we d prefer the tag provide feedback as 💬 leave review feedback as a comment in this issue 
and notify wolenetz
1
220,142
7,353,395,995
IssuesEvent
2018-03-09 00:28:39
PatHock/Mice-tro
https://api.github.com/repos/PatHock/Mice-tro
closed
Restructure project as a Gradle project
priority-high
While implementing a directory structure (Issue #8), I found that this work can be facilitated with the use of a build tool such as Maven or Gradle. Todo: redo our project as a Gradle project.
1.0
Restructure project as a Gradle project - While implementing a directory structure (Issue #8), I found that this work can be facilitated with the use of a build tool such as Maven or Gradle. Todo: redo our project as a Gradle project.
priority
restructure project as a gradle project while implementing a directory structure issue i found that this work can be facilitated with the use of a build tool such as maven or gradle todo redo our project as a gradle project
1
786,061
27,633,040,026
IssuesEvent
2023-03-10 12:25:03
Nordix/Meridio
https://api.github.com/repos/Nordix/Meridio
opened
Old NSM interface in LB might cause traffic disturbance
kind/bug priority/high concept/attractor area/networking component/stateless-lb
**Describe the bug** In case a Proxy is terminated abruptly it has no chance to Close() the NSM connections it maintains towards LBs. In such cases NSM provides a timeout mechanism (governed by MaxTokenLifetime) to issue a Close() on such connections. However, occasionally these NSM-originated Close() calls fail, so the related NSM interface is not removed on the LB side. If a new NSM connection is later established toward the affected LB whose IP subnet matches the previously stuck interface's subnet, it may cause forwarding issues, because the kernel might try to send packets through the old dangling interface. Solution proposal in Meridio: in an LB there should normally be only 1 interface for a particular subnet, connecting it to the proxy "owning" that subnet. Thus, whenever a new NSM connection is established towards an LB, it could check whether any other interfaces in the POD share the same subnet; if there are, their state should be set to DOWN. **To Reproduce** Steps to reproduce the behavior: ??? So far this has been noticed after reboots of workers hosting Proxies. Might be possible to **Expected behavior** The old interface should not get stuck. Otherwise the timeout mechanism NSM offers doesn't provide satisfactory recovery time. **Context** - Network Service Mesh: v1.6.1 - Meridio: v1.0.1 **Logs** Add logs here.
1.0
Old NSM interface in LB might cause traffic disturbance - **Describe the bug** In case a Proxy is terminated abruptly it has no chance to Close() the NSM connections it maintains towards LBs. In such cases NSM provides a timeout mechanism (governed by MaxTokenLifetime) to issue a Close() on such connections. However, occasionally these NSM-originated Close() calls fail, so the related NSM interface is not removed on the LB side. If a new NSM connection is later established toward the affected LB whose IP subnet matches the previously stuck interface's subnet, it may cause forwarding issues, because the kernel might try to send packets through the old dangling interface. Solution proposal in Meridio: in an LB there should normally be only 1 interface for a particular subnet, connecting it to the proxy "owning" that subnet. Thus, whenever a new NSM connection is established towards an LB, it could check whether any other interfaces in the POD share the same subnet; if there are, their state should be set to DOWN. **To Reproduce** Steps to reproduce the behavior: ??? So far this has been noticed after reboots of workers hosting Proxies. Might be possible to **Expected behavior** The old interface should not get stuck. Otherwise the timeout mechanism NSM offers doesn't provide satisfactory recovery time. **Context** - Network Service Mesh: v1.6.1 - Meridio: v1.0.1 **Logs** Add logs here.
priority
old nsm interface in lb might cause traffic disturbance describe the bug in case a proxy is terminated abruptly it has no chance to close the nsm connections it maintains towards lbs in such cases nsm provides a timeout mechanism governed by maxtokenlifetime to issue a close on such connections however occasionally these nsm originated close calls fail thus the related nsm interface is not removed from the lb side if later a new nsm connection gets established toward the affected lb for which the ip subnet matches the previously stuck interface s subnet it might cause forwarding issues that s because the kernel might try to send packets through the old dangling interface solution proposal in meridio in an lb normally there should be only interface for a particular subnet which connects it to a proxy owning the subnet thus whenever a new nsm connection is established towards an lb it could check if there are any other interfaces in the pod that might share the same subnet if there are their state should be set to down to reproduce steps to reproduce the behavior so far has been noticed after reboots of workers hosting proxies might be possible to expected behavior old interface should not get stuck otherwise the timeout mechanism nsm offers doesn t provide satisfactory recovery time context network service mesh meridio logs add logs here
1
802,474
28,963,691,140
IssuesEvent
2023-05-10 06:07:09
pombase/pombase-chado
https://api.github.com/repos/pombase/pombase-chado
closed
Combine not_assayed and wild-type product level in Chado
high priority next
We need not_assayed and wild-type product level genotypes to be stored in the same way (and therefore displayed together on the website)
1.0
Combine not_assayed and wild-type product level in Chado - We need not_assayed and wild-type product level genotypes to be stored in the same way (and therefore displayed together on the website)
priority
combine not assayed and wild type product level in chado we need not assayed and wild type product level genotypes to be stored in the same way and therefore displayed together on the website
1
594,750
18,052,911,609
IssuesEvent
2021-09-20 01:36:34
ctm/mb2-doc
https://api.github.com/repos/ctm/mb2-doc
closed
client crashed
bug high priority easy
In this evening's TBOE game the client crashed. Initially, I figured it was just for me because I was using the graphical view, but it was happening to everyone. The crash was on this line: ``` let diff = action - p.action; ``` in `calls_or_raises_to_helper`. It looks like it should be `p.this_round_action`, but I can't figure out why I haven't run into that issue before. I also haven't been able to make it crash locally, so I may have to strip the `easy` label from this issue.
1.0
client crashed - In this evening's TBOE game the client crashed. Initially, I figured it was just for me because I was using the graphical view, but it was happening to everyone. The crash was on this line: ``` let diff = action - p.action; ``` in `calls_or_raises_to_helper`. It looks like it should be `p.this_round_action`, but I can't figure out why I haven't run into that issue before. I also haven't been able to make it crash locally, so I may have to strip the `easy` label from this issue.
priority
client crashed in this evening s tboe game the client crashed initially i figured it was just for me because i was using the graphical view but it was happening to everyone the crash was on this line let diff action p action in calls or raises to helper it looks like it should be p this round action but i can t figure out why i haven t run into that issue before i also haven t been able to make it crash locally so i may have to strip the easy label from this issue
1
489,259
14,103,571,993
IssuesEvent
2020-11-06 10:28:56
markovejnovic/vim-dssl2
https://api.github.com/repos/markovejnovic/vim-dssl2
opened
Style Guide isn't enforced.
Priority: 2️⃣ High Status: ✔️ Available Type: ⚙️ Maintenance
Most of the codebase doesn't adhere to the style guide used in the repo. That is something that needs to get fixed.
1.0
Style Guide isn't enforced. - Most of the codebase doesn't adhere to the style guide used in the repo. That is something that needs to get fixed.
priority
style guide isn t enforced most of the codebase doesn t adhere to the style guide used in the repo that is something that needs to get fixed
1