Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
630,976 | 20,122,708,411 | IssuesEvent | 2022-02-08 05:20:56 | airavata-courses/neo | https://api.github.com/repos/airavata-courses/neo | closed | Containerize API gateway with Docker and configure image build in docker-compose | Type: Task Status: In Progress Priority: High BE | For #13, create a Dockerfile for building a Docker image and test connection on localhost of host machine.<br>Configure the build of this Dockerfile in docker-compose YAML file which contains instructions for spinning up and connecting all other containers. | 1.0 | Containerize API gateway with Docker and configure image build in docker-compose - For #13, create a Dockerfile for building a Docker image and test connection on localhost of host machine.<br>Configure the build of this Dockerfile in docker-compose YAML file which contains instructions for spinning up and connecting all other containers. | priority | containerize api gateway with docker and configure image build in docker compose for create a dockerfile for building a docker image and test connection on localhost of host machine configure the build of this dockerfile in docker compose yaml file which contains instructions for spinning up and connecting all other containers | 1 |
124,239 | 4,894,099,022 | IssuesEvent | 2016-11-19 03:50:12 | caver456/issue_test | https://api.github.com/repos/caver456/issue_test | opened | lag at end of new entry | bug help wanted Priority:High | in newEntryPost do a oneshot for layoutChanged and everything it depends on; that way the main table can update quicker (i.e. newEntryPost can complete quicker) | 1.0 | lag at end of new entry - in newEntryPost do a oneshot for layoutChanged and everything it depends on; that way the main table can update quicker (i.e. newEntryPost can complete quicker) | priority | lag at end of new entry in newentrypost do a oneshot for layoutchanged and everything it depends on that way the main table can update quicker i e newentrypost can complete quicker | 1 |
768,496 | 26,965,923,091 | IssuesEvent | 2023-02-08 22:21:36 | Darunada/namechangr | https://api.github.com/repos/Darunada/namechangr | closed | Utah Sex Offender Registry Form is out of date | bug help-wanted high-priority | The form needs to be updated with the newest version issued by BCI. I reached out to BCI for a docx formatted version and they said talk to the court. The court said BCI provides the form so talk to them. Ah government. Someone good with Word will need to recreate it in docx format, since I can't fill a PDF, I think. | 1.0 | Utah Sex Offender Registry Form is out of date - The form needs to be updated with the newest version issued by BCI. I reached out to BCI for a docx formatted version and they said talk to the court. The court said BCI provides the form so talk to them. Ah government. Someone good with Word will need to recreate it in docx format, since I can't fill a PDF, I think. | priority | utah sex offender registry form is out of date the form needs to be updated with the newest version issued by bci i reached out to bci for a docx formatted version and they said talk to the court the court said bci provides the form so talk to them ah government someone good with word will need to recreate it in docx format since i can t fill a pdf i think | 1 |
523,373 | 15,179,329,302 | IssuesEvent | 2021-02-14 19:05:33 | jesus-collective/mobile | https://api.github.com/repos/jesus-collective/mobile | closed | Wrong phone number format | High Priority | Can we please filter out any characters that are not numbers from the phone number submission by a user in the "tell us more about you" screen when creating accounts?<br>Currently, if the user enters dashes or brackets, etc it doesn't go through, but some users don't see the feedback error. | 1.0 | Wrong phone number format - Can we please filter out any characters that are not numbers from the phone number submission by a user in the "tell us more about you" screen when creating accounts?<br>Currently, if the user enters dashes or brackets, etc it doesn't go through, but some users don't see the feedback error. | priority | wrong phone number format can we please filter out any characters that are not numbers from the phone number submission by a user in the tell us more about you screen when creating accounts currently if the user enters dashes or brackets etc it doesn t go through but some users don t see the feedback error | 1 |
102,395 | 4,155,344,654 | IssuesEvent | 2016-06-16 14:40:17 | RestComm/mediaserver | https://api.github.com/repos/RestComm/mediaserver | opened | Implement Concurrency Model for MGCP Stack | enhancement High-Priority MGCP task | Implement a proper concurrency model for the new MGCP stack. | 1.0 | Implement Concurrency Model for MGCP Stack - Implement a proper concurrency model for the new MGCP stack. | priority | implement concurrency model for mgcp stack implement a proper concurrency model for the new mgcp stack | 1 |
252,119 | 8,032,179,948 | IssuesEvent | 2018-07-28 12:16:01 | Extum/flarum-ext-material | https://api.github.com/repos/Extum/flarum-ext-material | opened | [Dark Mode] Admin dashboard color fix | bug dark mode priority: high | **Describe the bug**<br>With dark mode enabled, the admin dashboard colors look a bit messed up.<br>**To Reproduce**<br>Steps to reproduce the behavior:<br>1. Go to admin panel<br>2. See error<br>**Expected behavior**<br>The sidebar background color should be darker.<br>**Screenshots**<br><br>**Environment (please complete the following information):**<br>- OS: macOS Mojave<br>- Browser: Firefox<br>- Flarum Version 0.1.0-beta.7 | 1.0 | [Dark Mode] Admin dashboard color fix - **Describe the bug**<br>With dark mode enabled, the admin dashboard colors look a bit messed up.<br>**To Reproduce**<br>Steps to reproduce the behavior:<br>1. Go to admin panel<br>2. See error<br>**Expected behavior**<br>The sidebar background color should be darker.<br>**Screenshots**<br><br>**Environment (please complete the following information):**<br>- OS: macOS Mojave<br>- Browser: Firefox<br>- Flarum Version 0.1.0-beta.7 | priority | admin dashboard color fix describe the bug with dark mode enabled the admin dashboard colors look a bit messed up to reproduce steps to reproduce the behavior go to admin panel see error expected behavior the sidebar background color should be darker screenshots environment please complete the following information os macos mojave browser firefox flarum version beta | 1 |
237,754 | 7,763,882,291 | IssuesEvent | 2018-06-01 18:11:03 | IUNetSci/hoaxy-backend | https://api.github.com/repos/IUNetSci/hoaxy-backend | opened | Invalid transaction is causing TopArticles API to fail | high priority | The log shows this error:<br>```<br>StatementError: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back [SQL: u'SELECT max(top20_article_monthly.upper_day) AS max_1 \nFROM top20_article_monthly'] [parameters: [{}]]<br>2018-06-01 14:04:45,055 - hoaxy(api) - ERROR: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back [SQL: u'SELECT max(top20_article_monthly.upper_day) AS max_1 \nFROM top20_article_monthly'] [parameters: [{}]]<br>Traceback (most recent call last):<br>File "/home/data/apps/hoaxy-backend/hoaxy/backend/api.py", line 582, in query_top_articles<br>df = db_query_top_articles(engine, **q_kwargs)<br>File "/home/data/apps/hoaxy-backend/hoaxy/ir/search.py", line 895, in db_query_top_articles<br>upper_day = get_max(session, Top20ArticleMonthly.upper_day)<br>File "/home/data/apps/hoaxy-backend/hoaxy/database/functions.py", line 105, in get_max<br>return q.scalar()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2785, in scalar<br>ret = self.one()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2756, in one<br>ret = self.one_or_none()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2726, in one_or_none<br>ret = list(self)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2797, in __iter__<br>return self._execute_and_instances(context)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2820, in _execute_and_instances<br>result = conn.execute(querycontext.statement, self._params)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute<br>return meth(self, multiparams, params)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection<br>return connection._execute_clauseelement(self, multiparams, params)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement<br>compiled_sql, distilled_params<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1121, in _execute_context<br>None, None)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception<br>exc_info<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause<br>reraise(type(exception), exception, tb=exc_tb, cause=cause)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1114, in _execute_context<br>conn = self._revalidate_connection()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 424, in _revalidate_connection<br>"Can't reconnect until invalid "<br>StatementError: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back [SQL: u'SELECT max(top20_article_monthly.upper_day) AS max_1 \nFROM top20_article_monthly'] [parameters: [{}]]<br>``` | 1.0 | Invalid transaction is causing TopArticles API to fail - The log shows this error:<br>```<br>StatementError: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back [SQL: u'SELECT max(top20_article_monthly.upper_day) AS max_1 \nFROM top20_article_monthly'] [parameters: [{}]]<br>2018-06-01 14:04:45,055 - hoaxy(api) - ERROR: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back [SQL: u'SELECT max(top20_article_monthly.upper_day) AS max_1 \nFROM top20_article_monthly'] [parameters: [{}]]<br>Traceback (most recent call last):<br>File "/home/data/apps/hoaxy-backend/hoaxy/backend/api.py", line 582, in query_top_articles<br>df = db_query_top_articles(engine, **q_kwargs)<br>File "/home/data/apps/hoaxy-backend/hoaxy/ir/search.py", line 895, in db_query_top_articles<br>upper_day = get_max(session, Top20ArticleMonthly.upper_day)<br>File "/home/data/apps/hoaxy-backend/hoaxy/database/functions.py", line 105, in get_max<br>return q.scalar()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2785, in scalar<br>ret = self.one()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2756, in one<br>ret = self.one_or_none()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2726, in one_or_none<br>ret = list(self)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2797, in __iter__<br>return self._execute_and_instances(context)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2820, in _execute_and_instances<br>result = conn.execute(querycontext.statement, self._params)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute<br>return meth(self, multiparams, params)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection<br>return connection._execute_clauseelement(self, multiparams, params)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement<br>compiled_sql, distilled_params<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1121, in _execute_context<br>None, None)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception<br>exc_info<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause<br>reraise(type(exception), exception, tb=exc_tb, cause=cause)<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1114, in _execute_context<br>conn = self._revalidate_connection()<br>File "/u/truthy/miniconda3/envs/hoaxy-backend/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 424, in _revalidate_connection<br>"Can't reconnect until invalid "<br>StatementError: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back [SQL: u'SELECT max(top20_article_monthly.upper_day) AS max_1 \nFROM top20_article_monthly'] [parameters: [{}]]<br>``` | priority | invalid transaction is causing toparticles api to fail the log shows this error statementerror sqlalchemy exc invalidrequesterror can t reconnect until invalid transaction is rolled back hoaxy api error sqlalchemy exc invalidrequesterror can t reconnect until invalid transaction is rolled back traceback most recent call last file home data apps hoaxy backend hoaxy backend api py line in query top articles df db query top articles engine q kwargs file home data apps hoaxy backend hoaxy ir search py line in db query top articles upper day get max session upper day file home data apps hoaxy backend hoaxy database functions py line in get max return q scalar file u truthy envs hoaxy backend lib site packages sqlalchemy orm query py line in scalar ret self one file u truthy envs hoaxy backend lib site packages sqlalchemy orm query py line in one ret self one or none file u truthy envs hoaxy backend lib site packages sqlalchemy orm query py line in one or none ret list self file u truthy envs hoaxy backend lib site packages sqlalchemy orm query py line in iter return self execute and instances context file u truthy envs hoaxy backend lib site packages sqlalchemy orm query py line in execute and instances result conn execute querycontext statement self params file u truthy envs hoaxy backend lib site packages sqlalchemy engine base py line in execute return meth self multiparams params file u truthy envs hoaxy backend lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement self multiparams params file u truthy envs hoaxy backend lib site packages sqlalchemy engine base py line in execute clauseelement compiled sql distilled params file u truthy envs hoaxy backend lib site packages sqlalchemy engine base py line in execute context none none file u truthy envs hoaxy backend lib site packages sqlalchemy engine base py line in handle dbapi exception exc info file u truthy envs hoaxy backend lib site packages sqlalchemy util compat py line in raise from cause reraise type exception exception tb exc tb cause cause file u truthy envs hoaxy backend lib site packages sqlalchemy engine base py line in execute context conn self revalidate connection file u truthy envs hoaxy backend lib site packages sqlalchemy engine base py line in revalidate connection can t reconnect until invalid statementerror sqlalchemy exc invalidrequesterror can t reconnect until invalid transaction is rolled back | 1 |
661,949 | 22,097,224,642 | IssuesEvent | 2022-06-01 11:06:42 | ooni/ooni.org | https://api.github.com/repos/ooni/ooni.org | closed | Facilitate OONI training for civil society groups in Zimbabwe | priority/high workshop | On 1st June 2022 I'll be facilitating an OONI training for civil society groups in Zimbabwe. In preparation, I'll be creating relevant workshop slides and hands-on exercises. | 1.0 | Facilitate OONI training for civil society groups in Zimbabwe - On 1st June 2022 I'll be facilitating an OONI training for civil society groups in Zimbabwe. In preparation, I'll be creating relevant workshop slides and hands-on exercises. | priority | facilitate ooni training for civil society groups in zimbabwe on june i ll be facilitating an ooni training for civil society groups in zimbabwe in preparation i ll be creating relevant workshop slides and hands on exercises | 1 |
306,851 | 9,412,244,246 | IssuesEvent | 2019-04-10 03:08:42 | CS2103-AY1819S2-W15-2/main | https://api.github.com/repos/CS2103-AY1819S2-W15-2/main | closed | Undo command does not recalculate budget | bug priority.High | **Describe the bug**<br>Undo command does not undo the action on budget.<br>**To Reproduce**<br>Steps to reproduce the behavior:<br>1. Add a budget for transport with amount $20, start date 05-04-2019 and end date 20-04-2019.<br>2. Add an expense for "bus" with cost $5 and category TRANSPORT with date 05-04-2019. This expense is reflected on the transport budget.<br>3. Undo<br>**Expected behavior**<br>The transport budget should undo its update, and have an amount of $0 after the undo command.<br>**Screenshots**<br>If applicable, add screenshots to help explain your problem<br><br>**Additional context**<br>Add any other context about the problem here.<br><hr><br>**Reported by:** @kev-inc<br>**Severity:** Medium<br><sub>[original: nus-cs2103-AY1819S2/pe-dry-run#623]</sub> | 1.0 | Undo command does not recalculate budget - **Describe the bug**<br>Undo command does not undo the action on budget.<br>**To Reproduce**<br>Steps to reproduce the behavior:<br>1. Add a budget for transport with amount $20, start date 05-04-2019 and end date 20-04-2019.<br>2. Add an expense for "bus" with cost $5 and category TRANSPORT with date 05-04-2019. This expense is reflected on the transport budget.<br>3. Undo<br>**Expected behavior**<br>The transport budget should undo its update, and have an amount of $0 after the undo command.<br>**Screenshots**<br>If applicable, add screenshots to help explain your problem<br><br>**Additional context**<br>Add any other context about the problem here.<br><hr><br>**Reported by:** @kev-inc<br>**Severity:** Medium<br><sub>[original: nus-cs2103-AY1819S2/pe-dry-run#623]</sub> | priority | undo command does not recalculate budget describe the bug undo command does not undo the action on budget to reproduce steps to reproduce the behavior add a budget for transport with amount start date and end date add an expense for bus with cost and category transport with date this expense is reflected on the transport budget undo expected behavior the transport budget should undo its update and have an amount of after the undo command screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here reported by kev inc severity medium | 1 |
677,229 | 23,155,667,612 | IssuesEvent | 2022-07-29 12:46:51 | RoyalHaskoningDHV/sam | https://api.github.com/repos/RoyalHaskoningDHV/sam | closed | BUG: SamQuantileMLP predict_ahead doesn't support Sequence | Priority: High Type: Bug | The type-hint for predict_ahead in SamQuantileMLP is<br>predict_ahead: Union[int, Sequence[int]] = 1,<br>But the parent BaseTimeseriesRegressor only supports List:<br>predict_ahead: Union[int, List[int]] = 1,<br>```python<br>>>> predict_ahead = (0,)<br>>>> isinstance(predict_ahead, Sequence)<br>True<br>>>> model = SamQuantileMLP(predict_ahead=predict_ahead)<br>>>> model.predict_ahead<br>[(0,)]<br>>>> model.validate_predict_ahead()<br>Traceback (most recent call last):<br>File "<stdin>", line 1, in <module><br>File "C:\Users\921266\source\repos\sam\sam\models\base_model.py", line 141, in validate_predict_ahead<br>if not all([p >= 0 for p in self.predict_ahead]):<br>File "C:\Users\921266\source\repos\sam\sam\models\base_model.py", line 141, in <listcomp><br>if not all([p >= 0 for p in self.predict_ahead]):<br>TypeError: '>=' not supported between instances of 'tuple' and 'int'<br>This is caused by the following line:<br>self.predict_ahead = (<br>predict_ahead if isinstance(predict_ahead, List) else [predict_ahead]<br>)<br>``` | 1.0 | BUG: SamQuantileMLP predict_ahead doesn't support Sequence - The type-hint for predict_ahead in SamQuantileMLP is<br>predict_ahead: Union[int, Sequence[int]] = 1,<br>But the parent BaseTimeseriesRegressor only supports List:<br>predict_ahead: Union[int, List[int]] = 1,<br>```python<br>>>> predict_ahead = (0,)<br>>>> isinstance(predict_ahead, Sequence)<br>True<br>>>> model = SamQuantileMLP(predict_ahead=predict_ahead)<br>>>> model.predict_ahead<br>[(0,)]<br>>>> model.validate_predict_ahead()<br>Traceback (most recent call last):<br>File "<stdin>", line 1, in <module><br>File "C:\Users\921266\source\repos\sam\sam\models\base_model.py", line 141, in validate_predict_ahead<br>if not all([p >= 0 for p in self.predict_ahead]):<br>File "C:\Users\921266\source\repos\sam\sam\models\base_model.py", line 141, in <listcomp><br>if not all([p >= 0 for p in self.predict_ahead]):<br>TypeError: '>=' not supported between instances of 'tuple' and 'int'<br>This is caused by the following line:<br>self.predict_ahead = (<br>predict_ahead if isinstance(predict_ahead, List) else [predict_ahead]<br>)<br>``` | priority | bug samquantilemlp predict ahead doesn t support sequence the type hint for predict ahead in samquantilemlp is predict ahead union but the parent basetimeseriesregressor only supports list predict ahead union python predict ahead isinstance predict ahead sequence true model samquantilemlp predict ahead predict ahead model predict ahead model validate predict ahead traceback most recent call last file line in file c users source repos sam sam models base model py line in validate predict ahead if not all file c users source repos sam sam models base model py line in if not all typeerror not supported between instances of tuple and int this is caused by the following line self predict ahead predict ahead if isinstance predict ahead list else | 1 |
786,640 | 27,660,945,488 | IssuesEvent | 2023-03-12 14:13:50 | AY2223S2-CS2103T-W11-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-W11-4/tp | closed | As a user I can add a person's medical condition | enhancement priority.High type.Enhancement | As a user, I can add a certain medical condition and add the relevant patients in the same category. | 1.0 | As a user I can add a person's medical condition - As a user, I can add a certain medical condition and add the relevant patients in the same category. | priority | as a user i can add a person s medical condition as a user i can add a certain medical condition and add the relevant patients in the same category | 1 |
335,880 | 10,167,904,174 | IssuesEvent | 2019-08-07 19:24:54 | BCcampus/edehr | https://api.github.com/repos/BCcampus/edehr | opened | Activity information not visible until refresh | Effort - Low Priority - High ~Bug | ### Description<br>When there are two or more activities and an instructor visits the tool only the first activity displays it's title and information. But the user can refresh the browser and this information appears.<br>**Expected behaviour**<br>The course listing page displays all activities with their title and information. | 1.0 | Activity information not visible until refresh - ### Description<br>When there are two or more activities and an instructor visits the tool only the first activity displays it's title and information. But the user can refresh the browser and this information appears.<br>**Expected behaviour**<br>The course listing page displays all activities with their title and information. | priority | activity information not visible until refresh description when there are two or more activities and an instructor visits the tool only the first activity displays it s title and information but the user can refresh the browser and this information appears expected behaviour the course listing page displays all activities with their title and information | 1 |
575,564 | 17,035,635,585 | IssuesEvent | 2021-07-05 06:39:12 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | scope of improvements in amp optimizer implementation | [Priority: HIGH] bug | Some unused CSS is loading on the page even with Tree shaking enabled.<br>some scripts are loading on the page, which is not required for that particular page.<br>For More information contact @MohammedKaludi and @ansaritalha | 1.0 | scope of improvements in amp optimizer implementation - Some unused CSS is loading on the page even with Tree shaking enabled.<br>some scripts are loading on the page, which is not required for that particular page.<br>For More information contact @MohammedKaludi and @ansaritalha | priority | scope of improvements in amp optimizer implementation some unused css is loading on the page even with tree shaking enabled some scripts are loading on the page which is not required for that particular page for more information contact mohammedkaludi and ansaritalha | 1 |
329,723 | 10,023,747,406 | IssuesEvent | 2019-07-16 20:01:04 | clearlinux/distribution | https://api.github.com/repos/clearlinux/distribution | closed | [Gnome3] Wrong default application for folders | bug desktop high priority | Opening a folder from the gnome search opens the gnome disk usage analyzer for the respective folder instead of the file browser:<br>```<br>robert@clear ~ $ xdg-mime query default inode/directory<br>org.gnome.baobab.desktop<br>```<br>Expected behavior: Open the corresponding folder in nautilus instead. | 1.0 | [Gnome3] Wrong default application for folders - Opening a folder from the gnome search opens the gnome disk usage analyzer for the respective folder instead of the file browser:<br>```<br>robert@clear ~ $ xdg-mime query default inode/directory<br>org.gnome.baobab.desktop<br>```<br>Expected behavior: Open the corresponding folder in nautilus instead. | priority | wrong default application for folders opening a folder from the gnome search opens the gnome disk usage analyzer for the respective folder instead of the file browser robert clear xdg mime query default inode directory org gnome baobab desktop expected behavior open the corresponding folder in nautilus instead | 1 |
253,764 | 8,065,650,443 | IssuesEvent | 2018-08-04 04:29:09 | sunwukonga/BBB | https://api.github.com/repos/sunwukonga/BBB | closed | [HomeScreen] -> [SearchResultScreen] Currently set to dummy page | high priority | 1. Remove dummy data<br>2. Pass search data to [SearchResultScreen]<br>3. Set off searchListing query | 1.0 | [HomeScreen] -> [SearchResultScreen] Currently set to dummy page - 1. Remove dummy data<br>2. Pass search data to [SearchResultScreen]<br>3. Set off searchListing query | priority | currently set to dummy page remove dummy data pass search data to set off searchlisting query | 1 |
637,985 | 20,692,630,881 | IssuesEvent | 2022-03-11 03:04:36 | AY2122S2-CS2113-F10-4/tp | https://api.github.com/repos/AY2122S2-CS2113-F10-4/tp | closed | Delete for contacts tracker | type.Story priority.High | As a user, I want to be able to delete a contact in case I make mistakes | 1.0 | Delete for contacts tracker - As a user, I want to be able to delete a contact in case I make mistakes | priority | delete for contacts tracker as a user i want to be able to delete a contact in case i make mistakes | 1 |
308,744 | 9,449,437,647 | IssuesEvent | 2019-04-16 01:54:28 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Wrong derivative backpropagating through Cholesky factorization? | high priority module: operators topic: derivatives triaged | ## 🐛 Bug<br>I am getting different derivatives when I compute `g(A)` via `A -> g(A)` versus `A -> LL' -> g(LL')`. The derivatives are off in a predictable way, and it's easy to correct (~2 lines of code), leading me to believe this is a bug.<br>## To Reproduce<br>Steps to reproduce the behavior:<br>```python<br>import torch<br>mat = torch.randn(4, 4, dtype=torch.float64)<br>mat = (mat @ mat.transpose(-1, -2)).div_(5).add_(torch.eye(4, dtype=torch.float64))<br>mat = mat.detach().clone().requires_grad_(True)<br>mat_clone = mat.detach().clone().requires_grad_(True)<br># Way 1<br>inv_mat1 = mat_clone.inverse() # A^{-1} = A^{-1}<br># Way 2<br>chol_mat = mat.cholesky()<br>chol_inv_mat = chol_mat.inverse().transpose(-2, -1)<br>inv_mat2 = chol_inv_mat @ chol_inv_mat.transpose(-2, -1) # A^{-1} = L^{-T}L^{-1}<br># True<br>print('Are these both A^{-1}?', bool(torch.norm(inv_mat1 - inv_mat2) < 1e-8))<br>inv_mat1.trace().backward()<br>inv_mat2.trace().backward()<br>print('Way 1\n', mat_clone.grad)<br>print('Way 2\n', mat.grad) # :-(<br>corrected_deriv = mat.grad.clone() / 2<br>corrected_deriv = corrected_deriv.tril() + corrected_deriv.tril().t()<br>print('Corrected derivative\n', corrected_deriv) # Simple correction to derivative works.<br># True<br>print('Is the corrected derivative correct?', bool(torch.norm(corrected_deriv - mat_clone.grad) < 1e-8))<br>```<br>## Expected behavior<br>Calling `inv_mat2.trace().backward()` should produce `corrected_deriv` in `mat.grad`, not what is currently going there. | 1.0 | Wrong derivative backpropagating through Cholesky factorization? - ## 🐛 Bug<br>I am getting different derivatives when I compute `g(A)` via `A -> g(A)` versus `A -> LL' -> g(LL')`. The derivatives are off in a predictable way, and it's easy to correct (~2 lines of code), leading me to believe this is a bug.<br>## To Reproduce<br>Steps to reproduce the behavior:<br>```python<br>import torch<br>mat = torch.randn(4, 4, dtype=torch.float64)<br>mat = (mat @ mat.transpose(-1, -2)).div_(5).add_(torch.eye(4, dtype=torch.float64))<br>mat = mat.detach().clone().requires_grad_(True)<br>mat_clone = mat.detach().clone().requires_grad_(True)<br># Way 1<br>inv_mat1 = mat_clone.inverse() # A^{-1} = A^{-1}<br># Way 2<br>chol_mat = mat.cholesky()<br>chol_inv_mat = chol_mat.inverse().transpose(-2, -1)<br>inv_mat2 = chol_inv_mat @ chol_inv_mat.transpose(-2, -1) # A^{-1} = L^{-T}L^{-1}<br># True<br>print('Are these both A^{-1}?', bool(torch.norm(inv_mat1 - inv_mat2) < 1e-8))<br>inv_mat1.trace().backward()<br>inv_mat2.trace().backward()<br>print('Way 1\n', mat_clone.grad)<br>print('Way 2\n', mat.grad) # :-(<br>corrected_deriv = mat.grad.clone() / 2<br>corrected_deriv = corrected_deriv.tril() + corrected_deriv.tril().t()<br>print('Corrected derivative\n', corrected_deriv) # Simple correction to derivative works.<br># True<br>print('Is the corrected derivative correct?', bool(torch.norm(corrected_deriv - mat_clone.grad) < 1e-8))<br>```<br>## Expected behavior<br>Calling `inv_mat2.trace().backward()` should produce `corrected_deriv` in `mat.grad`, not what is currently going there. | priority | wrong derivative backpropagating through cholesky factorization 🐛 bug i am getting different derivatives when i compute g a via a g a versus a ll g ll the derivatives are off in a predictable way and it s easy to correct lines of code leading me to believe this is a bug to reproduce steps to reproduce the behavior python import torch mat torch randn dtype torch mat mat mat transpose div add torch eye dtype torch mat mat detach clone requires grad true mat clone mat detach clone requires grad true way inv mat clone inverse a a way chol mat mat cholesky chol inv mat chol mat inverse transpose inv chol inv mat chol inv mat transpose a l t l true print are these both a bool torch norm inv inv inv trace backward inv trace backward print way n mat clone grad print way n mat grad corrected deriv mat grad clone corrected deriv corrected deriv tril corrected deriv tril t print corrected derivative n corrected deriv simple correction to derivative works true print is the corrected derivative correct bool torch norm corrected deriv mat clone grad expected behavior calling inv trace backward should produce corrected deriv in mat grad not what is currently going there | 1 |
80,017 | 3,549,574,437 | IssuesEvent | 2016-01-20 18:33:15 | Valhalla-Gaming/Tracker | https://api.github.com/repos/Valhalla-Gaming/Tracker | closed | [brewmaster] guard | Class-Monk Priority-High Type-Spell | "http://www.wowhead.com/spell=115295/guard
how it should work: consumes 2 chi and absorbs [1 * (Attack power * 18) * (1 + $versadmg)] damage.
what happens: It consumes 2 chi but the absorbed dmg is much too low (ca. 10%)." | 1.0 | [brewmaster] guard - "http://www.wowhead.com/spell=115295/guard
how it should work: consumes 2 chi and absorbs [1 * (Attack power * 18) * (1 + $versadmg)] damage.
what happens: It consumes 2 chi but the absorbed dmg is much too low (ca. 10%)." | priority | guard how it should work consumes chi and absorbing damage what happens it consumes chi but the absorbed dmg is much too less ca | 1
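The absorb formula quoted in the record above is plain arithmetic; a hypothetical helper makes it concrete (the function name and the sample numbers are illustrative assumptions, not values from the game):

```python
def guard_absorb(attack_power: float, versatility_damage_bonus: float) -> float:
    """Absorb amount per the quoted formula:
    1 * (Attack power * 18) * (1 + $versadmg)."""
    return 1 * (attack_power * 18) * (1 + versatility_damage_bonus)

# e.g. 1000 attack power with a 25% versatility damage bonus:
print(guard_absorb(1000, 0.25))  # 22500.0
```

An absorb of roughly 10% of this value, as the report describes, would suggest the multiplier or the versatility term is being dropped server-side.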
212,293 | 7,235,431,741 | IssuesEvent | 2018-02-13 00:40:54 | wedeploy/marble | https://api.github.com/repos/wedeploy/marble | closed | Blog bullet points unaligned on Edge and IE11 | 1 - high priority bug | Edge version: 41.16299.15.0
<img width="712" alt="screen shot 2018-02-09 at 2 11 56 pm" src="https://user-images.githubusercontent.com/23219848/36052687-8985c8b0-0da3-11e8-93dd-50efee87e2b6.png">
| 1.0 | Blog bullet points unaligned on Edge and IE11 - Edge version: 41.16299.15.0
<img width="712" alt="screen shot 2018-02-09 at 2 11 56 pm" src="https://user-images.githubusercontent.com/23219848/36052687-8985c8b0-0da3-11e8-93dd-50efee87e2b6.png">
| priority | blog bullet points unaligned on edge and edge version img width alt screen shot at pm src | 1 |
795,982 | 28,095,301,809 | IssuesEvent | 2023-03-30 15:24:32 | AY2223S2-CS2103T-W12-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-W12-4/tp | closed | View patient details | type.Story priority.High | As a doctor, I can view patient details of a patient so that I can see everything I want and need to know about them at a glance. | 1.0 | View patient details - As a doctor, I can view patient details of a patient so that I can see everything I want and need to know about them at a glance. | priority | view patient details as a doctor i can view patient details of a patient so that i can see everything i want and need to know about them at a glance | 1 |
643,210 | 20,925,943,637 | IssuesEvent | 2022-03-24 22:57:04 | gilhrpenner/COMP4350 | https://api.github.com/repos/gilhrpenner/COMP4350 | opened | Return dummy URL of photos when in DEV mode | dev task high priority | ## Description
We are currently hosting our images on AWS S3. The free tier has a limit of 2k requests per month, and since we are constantly developing and testing, this limit can easily be reached in a couple of days, which is bad! So, in order not to waste money, we need to return dummy URLs while in dev mode.
## Acceptance Criteria
- production mode should still show the image hosted on S3
| 1.0 | Return dummy URL of photos when in DEV mode - ## Description
We are currently hosting our images on AWS S3. The free tier has a limit of 2k requests per month, and since we are constantly developing and testing, this limit can easily be reached in a couple of days, which is bad! So, in order not to waste money, we need to return dummy URLs while in dev mode.
## Acceptance Criteria
- production mode should still show the image hosted on S3
| priority | return dummy url of photos when in dev mode description we are currently hosting our images with aws the free tier has a limit of requests per month since we are constantly developing and testing this limit easily can be reached in a couple of days which is bad so in order to not waste money we need to return dummy urls while in dev mode acceptance criteria production mode should still show the image hosted on | 1 |
484,214 | 13,936,349,156 | IssuesEvent | 2020-10-22 12:49:50 | tomav/docker-mailserver | https://api.github.com/repos/tomav/docker-mailserver | closed | Build-Push-Action / Multiarch | enhancement frozen due to age help wanted kubernetes priority 1 [HIGH] roadmap | <!--- Provide a general summary of the issue in the Title above -->
When starting the server, got `standard_init_linux.go:190: exec user process caused "exec format error"`.
## Context
<!--- Provide a more detailed introduction to the issue itself -->
<!--- How has this issue affected you? What were you trying to accomplish? -->
Install docker and docker-compose on raspberry (OS raspbian), install and start the server
## Expected Behavior
<!--- Tell us what should happen -->
Should start
## Actual Behavior
<!--- Tell us what happens instead -->
Not starting
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the issue -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the issue in -->
* Amount of RAM available: 2GB
* Mailserver version used: master
* Docker version used: 18.09.0
* Environment settings relevant to the config: Rasberry (arm) on raspbian
* Any relevant stack traces ("Full trace" preferred):
```
standard_init_linux.go:190: exec user process caused "exec format error"
```
| 1.0 | Build-Push-Action / Multiarch - <!--- Provide a general summary of the issue in the Title above -->
When starting the server, got `standard_init_linux.go:190: exec user process caused "exec format error"`.
## Context
<!--- Provide a more detailed introduction to the issue itself -->
<!--- How has this issue affected you? What were you trying to accomplish? -->
Install docker and docker-compose on raspberry (OS raspbian), install and start the server
## Expected Behavior
<!--- Tell us what should happen -->
Should start
## Actual Behavior
<!--- Tell us what happens instead -->
Not starting
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the issue -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the issue in -->
* Amount of RAM available: 2GB
* Mailserver version used: master
* Docker version used: 18.09.0
* Environment settings relevant to the config: Rasberry (arm) on raspbian
* Any relevant stack traces ("Full trace" preferred):
```
standard_init_linux.go:190: exec user process caused "exec format error"
```
| priority | build push action multiarch when starting the server got standard init linux go exec user process caused exec format error context install docker and docker compose on raspberry os raspbian install and start the server expected behavior should start actual behavior not starting possible fix your environment amount of ram available mailserver version used master docker version used environment settings relevant to the config rasberry arm on raspbian any relevant stack traces full trace preferred standard init linux go exec user process caused exec format error | 1 |
318,284 | 9,690,307,671 | IssuesEvent | 2019-05-24 08:18:45 | status-im/status-react | https://api.github.com/repos/status-im/status-react | opened | Can't attach logs to email draft on android | android bug high-priority low-severity | # Problem
Error `can't add attachment` shown when trying to send logs on Android (7 and 8 at least). User can't send logs.
## Steps
1. install status and create account
2. profile -> dev mode -> send logs -> gmail

## Build
Reproduced in nightly build https://status-im.ams3.digitaloceanspaces.com/StatusIm-190524-025900-ee1277-nightly.apk
Devices: Android 7 (Xiaomi), Android 8 (Huawei)
| 1.0 | Can't attach logs to email draft on android - # Problem
Error `can't add attachment` shown when trying to send logs on Android (7 and 8 at least). User can't send logs.
## Steps
1. install status and create account
2. profile -> dev mode -> send logs -> gmail

## Build
Reproduced in nightly build https://status-im.ams3.digitaloceanspaces.com/StatusIm-190524-025900-ee1277-nightly.apk
Devices: Android 7 (Xiaomi), Android 8 (Huawei)
| priority | can t attach logs to email draft on android problem error can t add attachment shown when trying to send logs on android and at least user can t send logs steps install status and create account profile dev mode send logs gmail build reproduced in nightly build devices android xiaomi android huawei | 1 |
458,396 | 13,174,529,290 | IssuesEvent | 2020-08-11 22:44:06 | IslasGECI/dimorfismo | https://api.github.com/repos/IslasGECI/dimorfismo | closed | Minimum and maximum values are reversed in `best_logistic_model_parameters_laal_ig.json` | Priority: High Status: Available Type: Bug | Below you can see that the minimum value for the bill depth is larger than the maximum value, and the same goes for the tarsus:
```
{
"parametrosNormalizacion": {
"valorMinimo": {
"longitudCraneo": [3.7037],
"altoPico": [165.58],
"longitudPico": [29.67],
"tarso": [101.78]
},
"valorMaximo": {
"longitudCraneo": [83.18],
"altoPico": [46.5],
"longitudPico": [193.22],
"tarso": [35.44]
}
},
"parametrosModelo": [
{
"Variables": "(Intercept)",
"Estimate": -18.948,
"_row": "(Intercept)"
},
{
"Variables": "longitudCraneo",
"Estimate": 6.576,
"_row": "longitudCraneo"
},
{
"Variables": "altoPico",
"Estimate": 8.816,
"_row": "altoPico"
},
{
"Variables": "longitudPico",
"Estimate": 7.172,
"_row": "longitudPico"
},
{
"Variables": "tarso",
"Estimate": 5.726,
"_row": "tarso"
}
]
}
``` | 1.0 | Minimum and maximum values are reversed in `best_logistic_model_parameters_laal_ig.json` - Below you can see that the minimum value for the bill depth is larger than the maximum value, and the same goes for the tarsus:
```
{
"parametrosNormalizacion": {
"valorMinimo": {
"longitudCraneo": [3.7037],
"altoPico": [165.58],
"longitudPico": [29.67],
"tarso": [101.78]
},
"valorMaximo": {
"longitudCraneo": [83.18],
"altoPico": [46.5],
"longitudPico": [193.22],
"tarso": [35.44]
}
},
"parametrosModelo": [
{
"Variables": "(Intercept)",
"Estimate": -18.948,
"_row": "(Intercept)"
},
{
"Variables": "longitudCraneo",
"Estimate": 6.576,
"_row": "longitudCraneo"
},
{
"Variables": "altoPico",
"Estimate": 8.816,
"_row": "altoPico"
},
{
"Variables": "longitudPico",
"Estimate": 7.172,
"_row": "longitudPico"
},
{
"Variables": "tarso",
"Estimate": 5.726,
"_row": "tarso"
}
]
}
``` | priority | valores mínimos y máximos están al revés en best logistic model parameters laal ig json abajo se puede ver que el valor mínimo del alto del pico es más grande que el valor máximo y los mismo con el tarso parametrosnormalizacion valorminimo longitudcraneo altopico longitudpico tarso valormaximo longitudcraneo altopico longitudpico tarso parametrosmodelo variables intercept estimate row intercept variables longitudcraneo estimate row longitudcraneo variables altopico estimate row altopico variables longitudpico estimate row longitudpico variables tarso estimate row tarso | 1 |
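The mix-up the record above describes is easy to detect mechanically. A minimal sketch using the bounds quoted in the JSON (keys and numbers are copied from the record; the check itself, that every minimum must lie below its maximum, is an assumption about the intended invariant):

```python
# Normalization bounds as quoted in best_logistic_model_parameters_laal_ig.json
valor_minimo = {"longitudCraneo": 3.7037, "altoPico": 165.58,
                "longitudPico": 29.67, "tarso": 101.78}
valor_maximo = {"longitudCraneo": 83.18, "altoPico": 46.5,
                "longitudPico": 193.22, "tarso": 35.44}

# Every minimum should be strictly below its maximum; list the offenders.
swapped = [k for k in valor_minimo if valor_minimo[k] > valor_maximo[k]]
print(swapped)  # ['altoPico', 'tarso'], the two fields the issue reports
```

Swapping the offending pairs back restores a consistent normalization range.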
801,374 | 28,485,807,362 | IssuesEvent | 2023-04-18 07:47:33 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | Guild reputation from killing raid bosses | Status: Need Info Status: Confirmed Priority: High | If you have a guild raid group, each killed boss should reward 60 guild reputation. Currently, no guild reputation is given after killing raid bosses.
Source:
https://youtu.be/uqvmBnxUabE?t=294
https://youtu.be/h6fGunTR068?t=509 | 1.0 | Guild reputation from killing raid bosses - If you have a guild raid group, each killed boss should reward 60 guild reputation. Currently, no guild reputation is given after killing raid bosses.
Source:
https://youtu.be/uqvmBnxUabE?t=294
https://youtu.be/h6fGunTR068?t=509 | priority | guild reputation from killing raid bosses if you have guild raid group each killed boss should reward with guild reputation currently there is no guild reputation given after killing raid bosses source | 1 |
710,953 | 24,445,201,973 | IssuesEvent | 2022-10-06 17:20:32 | fyusuf-a/ft_transcendence | https://api.github.com/repos/fyusuf-a/ft_transcendence | closed | Chat messages don't appear in DMs when they are first created | bug frontend HIGH PRIORITY | ### To Reproduce
- Create a new DM by clicking the plus in the ChannelList, clicking DM in the ChannelJoinDialog, and then entering a valid username in the text field and selecting the user from the list that appears
- The ChatWindow should open automatically
- Enter a message in the text area and submit it
- The message is successfully sent, but does not appear unless the channel is refreshed.
### Expected Behavior
- The message should appear when sent | 1.0 | Chat messages don't appear in DMs when they are first created - ### To Reproduce
- Create a new DM by clicking the plus in the ChannelList, clicking DM in the ChannelJoinDialog, and then entering a valid username in the text field and selecting the user from the list that appears
- The ChatWindow should open automatically
- Enter a message in the text area and submit it
- The message is successfully sent, but does not appear unless the channel is refreshed.
### Expected Behavior
- The message should appear when sent | priority | chat messages don t appear in dms when they are first created to reproduce create a new dm by clicking the plus in the channellist clicking dm in the channeljoindialog and then entering a valid username in the text field and selecting the user from the list that appears the chatwindow should open automatically enter a message in the text area and submit it the message is successfully sent but does not appear unless the channel is refreshed expected behavior the message should appear when sent | 1 |
633,115 | 20,245,312,993 | IssuesEvent | 2022-02-14 13:13:09 | PoProstuMieciek/wikipedia-scraper | https://api.github.com/repos/PoProstuMieciek/wikipedia-scraper | closed | feat/html-parser | priority: high type: feat | **AC**
- function:
- [ ] gets an HTML string
- [ ] parses the string using [`jsdom`](https://www.npmjs.com/package/jsdom) package
- [ ] returns `JsDom` instance
| 1.0 | feat/html-parser - **AC**
- function:
- [ ] gets an HTML string
- [ ] parses the string using [`jsdom`](https://www.npmjs.com/package/jsdom) package
- [ ] returns `JsDom` instance
| priority | feat html parser ac function gets a html string parses the string using package returns jsdom instance | 1 |
578,142 | 17,145,212,770 | IssuesEvent | 2021-07-13 13:58:44 | gitpod-io/gitpod | https://api.github.com/repos/gitpod-io/gitpod | closed | mysql does not come up bc of missing resource requests | dev experience priority: high (dev loop impact) type: bug | ### Bug description
The mysql container does not request any resources, and thus gets terminated in some situations ([here](https://console.cloud.google.com/kubernetes/pod/europe-west1-b/dev/staging-gpl-headless-log-content/mysql-0/details?project=gitpod-core-dev) and [here](https://console.cloud.google.com/kubernetes/pod/europe-west1-b/dev/staging-csweichel-ws-daemon-fails-mounting-4784/mysql-0/yaml/view?project=gitpod-core-dev)).
This results in broken deployments (timeout) and CrashLoopBackoffs, blocking those environments.
### Steps to reproduce
Start enough preview environments.
### Expected behavior
_No response_
### Example repository
_No response_
### Anything else?
_No response_ | 1.0 | mysql does not come up bc of missing resource requests - ### Bug description
The mysql container does not request any resources, and thus gets terminated in some situations ([here](https://console.cloud.google.com/kubernetes/pod/europe-west1-b/dev/staging-gpl-headless-log-content/mysql-0/details?project=gitpod-core-dev) and [here](https://console.cloud.google.com/kubernetes/pod/europe-west1-b/dev/staging-csweichel-ws-daemon-fails-mounting-4784/mysql-0/yaml/view?project=gitpod-core-dev)).
This results in broken deployments (timeout) and CrashLoopBackoffs, blocking those environments.
### Steps to reproduce
Start enough preview environments.
### Expected behavior
_No response_
### Example repository
_No response_
### Anything else?
_No response_ | priority | mysql does not come up bc of missing resource requests bug description the mysql container does not request any resources and thus gets terminated in some situations and this results in broken deployments timeout and crashloopbackoffs blocking those environments steps to reproduce start enough preview environments expected behavior no response example repository no response anything else no response | 1 |
629,568 | 20,036,263,486 | IssuesEvent | 2022-02-02 12:14:38 | eGirlsAreRuiningMyAC/IoT-AC | https://api.github.com/repos/eGirlsAreRuiningMyAC/IoT-AC | closed | Adjust device brightness | high priority 1p closed feature | Acceptance criteria: user is able to modify the device's brightness through our app | 1.0 | Adjust device brightness - Acceptance criteria: user is able to modify the device's brightness through our app | priority | adjust device brightness acceptance criteria user is able to modify the device s brightness through our app | 1 |
720,547 | 24,796,608,862 | IssuesEvent | 2022-10-24 17:50:30 | wazuh/wazuh-documentation | https://api.github.com/repos/wazuh/wazuh-documentation | closed | 5.0 Getting started Architecture rework | priority: highest type: refactor | Hello team!
The aim of this issue is to adapt the Architecture section in the Getting started for Wazuh 5.0.
We must also change the diagrams of the section.
Regards,
David | 1.0 | 5.0 Getting started Architecture rework - Hello team!
The aim of this issue is to adapt the Architecture section in the Getting started for Wazuh 5.0.
We must also change the diagrams of the section.
Regards,
David | priority | getting started architecture rework hello team the aim of this issue is to adapt the architecture section in the getting started for wazuh we must also change the diagrams of the section regards david | 1 |
303,033 | 9,301,482,892 | IssuesEvent | 2019-03-23 22:28:11 | codeforbtv/green-up-app | https://api.github.com/repos/codeforbtv/green-up-app | closed | Login fails without notifying user. | Priority: High Type: Bug V2 | To reproduce, login via Facebook with an account that already exists with the same email address but different sign-in credentials.
After seeing a flash of "Seeing green thoughts", the user is redirected back to the login screen without a message that the login failed.
User should receive a message that the login failed and if possible, with the reason why. | 1.0 | Login fails without notifying user. - To reproduce, login via Facebook with an account that already exists with the same email address but different sign-in credentials.
After seeing a flash of "Seeing green thoughts", the user is redirected back to the login screen without a message that the login failed.
User should receive a message that the login failed and if possible, with the reason why. | priority | login fails without notifying user to reproduce login via facebook with an account that already exists with the same email address but different sign in credentials after seeing a flash of seeing green thoughts the user is redirected back to the login screen without a message that the login failed user should receive a message that the login failed and if possible with the reason why | 1 |
460,136 | 13,205,336,544 | IssuesEvent | 2020-08-14 17:44:08 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Prototype implementation example script generation fails | Priority/High REST_API Type/Bug Type/React-UI | ### Description:
After creating an API with https://petstore.swagger.io/v2/swagger.json, below error log is printed in carbon logs when selecting the "Prototype Implementation" option.
```
com.fasterxml.jackson.core.JsonGenerationException: Can not write a string, expecting field name (context: Object)
at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:2080)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._reportCantWriteValueExpectName(JsonGeneratorImpl.java:248)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._verifyPrettyValueWrite(JsonGeneratorImpl.java:238)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._verifyValueWrite(WriterBasedJsonGenerator.java:894)
...
..
at com.fasterxml.jackson.databind.ObjectWriter.writeValueAsString(ObjectWriter.java:1005)
at io.swagger.util.Json.pretty(Json.java:23)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.getSchemaExample_aroundBody6(OAS2Parser.java:206)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.getSchemaExample(OAS2Parser.java:202)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.generateExample_aroundBody4(OAS2Parser.java:170)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.generateExample(OAS2Parser.java:115)
at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateExamples_aroundBody4(OASParserUtil.java:226)
at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateExamples(OASParserUtil.java:220)
at org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl.getGeneratedMockScriptsOfAPI(ApisApiServiceImpl.java:2186)
at org.wso2.carbon.apimgt.rest.api.publisher.v1.ApisApi.getGeneratedMockScriptsOfAPI(ApisApi.java:1050)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
```
Prototype Endpoint implementation scripts are not generated as expected. Only a simple script is generated, as below.
```
/* mc.setProperty('CONTENT_TYPE', 'application/json');
mc.setPayloadJSON('{ "data" : "sample JSON"}');*/
/*Uncomment the above comment block to send a sample response.*/
```
But when we make a change in the script and click "Reset", an advanced script is generated. However, if we then try to save it, the same error is thrown in the carbon logs. In any case, there is no issue when invoking the APIs.
BTW, this swagger file doesn't give backend errors: https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v2.0/yaml/petstore-expanded.yaml But no advanced script is generated for this one either.
### Steps to reproduce:
### Affected Product Version:
apim-3.2.0-RC4
| 1.0 | Prototype implementation example script generation fails - ### Description:
After creating an API with https://petstore.swagger.io/v2/swagger.json, below error log is printed in carbon logs when selecting the "Prototype Implementation" option.
```
com.fasterxml.jackson.core.JsonGenerationException: Can not write a string, expecting field name (context: Object)
at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:2080)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._reportCantWriteValueExpectName(JsonGeneratorImpl.java:248)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._verifyPrettyValueWrite(JsonGeneratorImpl.java:238)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._verifyValueWrite(WriterBasedJsonGenerator.java:894)
...
..
at com.fasterxml.jackson.databind.ObjectWriter.writeValueAsString(ObjectWriter.java:1005)
at io.swagger.util.Json.pretty(Json.java:23)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.getSchemaExample_aroundBody6(OAS2Parser.java:206)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.getSchemaExample(OAS2Parser.java:202)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.generateExample_aroundBody4(OAS2Parser.java:170)
at org.wso2.carbon.apimgt.impl.definitions.OAS2Parser.generateExample(OAS2Parser.java:115)
at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateExamples_aroundBody4(OASParserUtil.java:226)
at org.wso2.carbon.apimgt.impl.definitions.OASParserUtil.generateExamples(OASParserUtil.java:220)
at org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl.getGeneratedMockScriptsOfAPI(ApisApiServiceImpl.java:2186)
at org.wso2.carbon.apimgt.rest.api.publisher.v1.ApisApi.getGeneratedMockScriptsOfAPI(ApisApi.java:1050)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
```
Prototype Endpoint implementation scripts are not generated as expected. Only a simple script is generated, as below.
```
/* mc.setProperty('CONTENT_TYPE', 'application/json');
mc.setPayloadJSON('{ "data" : "sample JSON"}');*/
/*Uncomment the above comment block to send a sample response.*/
```
But when we make a change in the script and click "Reset", an advanced script is generated. However, if we then try to save it, the same error is thrown in the carbon logs. In any case, there is no issue when invoking the APIs.
BTW, this swagger file doesn't give backend errors: https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v2.0/yaml/petstore-expanded.yaml But no advanced script is generated for this one either.
### Steps to reproduce:
### Affected Product Version:
apim-3.2.0-RC4
| priority | prototype implementation example script generation fails description after creating an api with below error log is printed in carbon logs when selecting the prototype implementation option com fasterxml jackson core jsongenerationexception can not write a string expecting field name context object at com fasterxml jackson core jsongenerator reporterror jsongenerator java at com fasterxml jackson core json jsongeneratorimpl reportcantwritevalueexpectname jsongeneratorimpl java at com fasterxml jackson core json jsongeneratorimpl verifyprettyvaluewrite jsongeneratorimpl java at com fasterxml jackson core json writerbasedjsongenerator verifyvaluewrite writerbasedjsongenerator java at com fasterxml jackson databind objectwriter writevalueasstring objectwriter java at io swagger util json pretty json java at org carbon apimgt impl definitions getschemaexample java at org carbon apimgt impl definitions getschemaexample java at org carbon apimgt impl definitions generateexample java at org carbon apimgt impl definitions generateexample java at org carbon apimgt impl definitions oasparserutil generateexamples oasparserutil java at org carbon apimgt impl definitions oasparserutil generateexamples oasparserutil java at org carbon apimgt rest api publisher impl apisapiserviceimpl getgeneratedmockscriptsofapi apisapiserviceimpl java at org carbon apimgt rest api publisher apisapi getgeneratedmockscriptsofapi apisapi java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java prototype endpoint implementation scripts are not generated as expected only a simple script is generated as below mc setproperty content type application json mc setpayloadjson data sample json uncomment the above comment block to send a sample response but when we do some change in a 
script and click on reset then an advanced script is generated but then if we try to save this it will throw the same error in carbon logs anyway there is no any issue when invoking the apis btw this swagger file doesn t give backend errors but no advance script is gennerated for this one too steps to reproduce affected product version apim | 1 |
779,168 | 27,342,308,637 | IssuesEvent | 2023-02-26 23:02:28 | PlanktonTeam/planktonr | https://api.github.com/repos/PlanktonTeam/planktonr | closed | Problem with Sample_Depth in pr_get_NRSTrips | bug high priority | There is an option in `pr_get_NRSTrips` to use all of `"P", "Z" and "F"` to retrieve all the data but this causes trouble because the sample_depth is renamed depending on what is retrieved.
I suggest we only allow a single option. What do you think @clairedavies ? * But see alternative below.
In `pr_get_bgc` this also introduces problems with multiple sample_depths at the compilation stage at the end.
An alternative option to above, which will also solve the `pr_get_bgc` problem (I think) is to change output of `pr_get_NRSTrips` to long format so the biomass, AFW etc of Phyto, Zoo and Fish are returned in rows, rather than columns. This reduces the confusion about the sample_depth column names.
| 1.0 | Problem with Sample_Depth in pr_get_NRSTrips - There is an option in `pr_get_NRSTrips` to use all of `"P", "Z" and "F"` to retrieve all the data but this causes trouble because the sample_depth is renamed depending on what is retrieved.
I suggest we only allow a single option. What do you think, @clairedavies? (But see the alternative below.)
In `pr_get_bgc` this also introduces problems with multiple sample_depths at the compilation stage at the end.
An alternative option to above, which will also solve the `pr_get_bgc` problem (I think) is to change output of `pr_get_NRSTrips` to long format so the biomass, AFW etc of Phyto, Zoo and Fish are returned in rows, rather than columns. This reduces the confusion about the sample_depth column names.
| priority | problem with sample depth in pr get nrstrips there is an option in pr get nrstrips to use all of p z and f to retrieve all the data but this causes trouble because the sample depth is renamed depending on what is retrieved i suggest we only allow a single option what do you think clairedavies but see alternative below in pr get bgc this also introduces problems with multiple sample depths at the compilation stage at the end an alternative option to above which will also solve the pr get bgc problem i think is to change output of pr get nrstrips to long format so the biomass afw etc of phyto zoo and fish are returned in rows rather than columns this reduces the confusion about the sample depth column names | 1 |
338,920 | 10,239,418,019 | IssuesEvent | 2019-08-19 18:12:01 | EUCweb/BIS-F | https://api.github.com/repos/EUCweb/BIS-F | opened | Detect WVD Multi User | Priority: High | During BIS-F Sealing, it's necessary to detect WVD Multi User for BIS-F System Management and not run sysprep.
 | 1.0 | Detect WVD Multi User - During BIS-F Sealing, it's necessary to detect WVD Multi User for BIS-F System Management and not run sysprep.
| priority | detect wvd multi user during bis f sealing it s necassary to detect wvd multi user for bis f system management and not run sysprep | 1 |
645,789 | 21,015,341,927 | IssuesEvent | 2022-03-30 10:27:36 | bitfoundation/bitframework | https://api.github.com/repos/bitfoundation/bitframework | closed | Missing 2-way bound `SelectedKey` implementations in `BitPivot` component | bug area / components high priority | The SelectItem process implemented in the BitPivot component is not a complete and correct 2-way bound process. it finally needs to have a call to the `SelectedKeyChanged` handler in order to finish the process which is missing now. | 1.0 | Missing 2-way bound `SelectedKey` implementations in `BitPivot` component - The SelectItem process implemented in the BitPivot component is not a complete and correct 2-way bound process. it finally needs to have a call to the `SelectedKeyChanged` handler in order to finish the process which is missing now. | priority | missing way bound selectedkey implementations in bitpivot component the selectitem process implemented in the bitpivot component is not a complete and correct way bound process it finally needs to have a call to the selectedkeychanged handler in order to finish the process which is missing now | 1 |
96,184 | 3,966,094,470 | IssuesEvent | 2016-05-03 11:25:32 | OCHA-DAP/liverpool16 | https://api.github.com/repos/OCHA-DAP/liverpool16 | closed | Map Explorer: Powerview load via url | enhancement High Priority | allow map explorer to load a powerview as json config (?url=....) (geojson, powerview) | 1.0 | Map Explorer: Powerview load via url - allow map explorer to load a powerview as json config (?url=....) (geojson, powerview) | priority | map explorer powerview load via url allow map explorer to load a powerview as json config url geojson powerview | 1 |
600,493 | 18,297,994,500 | IssuesEvent | 2021-10-05 22:36:04 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Panorama console panel | Type: Feature Priority: High Where: Engine Size: Medium | Implement an ingame console based on Panorama to solve the input issues arising from having VGUI and Panorama active simultaneously
Relevant issues:
- #1355
- #1393
- #1178
- #1479
- #1420
Planned features:
- [x] Main panel
- [x] Receive console messages
- [x] Send console commands
- [x] Autocomplete (maybe a bit cooler?)
- [x] Draggable for move and resize (resize maybe not at every panel edge for simplicity's sake, c.f. Dota 2)
- [x] Proper input handling
- [x] Keep functionality separate from design principle (i.e. allow quake-style console if desired)
- [x] HUD panel (developer 1 and con_drawnotify 1)
- [x] work like VGUI
- [x] General considerations
- [x] Bunch up identical messages (maybe something more elaborate down the line, just `Missing material xyz (x7)` for now)
- [x] Try to prevent large perf impact on high message load
- [x] Filtering
| 1.0 | Panorama console panel - Implement an ingame console based on Panorama to solve the input issues arising from having VGUI and Panorama active simultaneously
Relevant issues:
- #1355
- #1393
- #1178
- #1479
- #1420
Planned features:
- [x] Main panel
- [x] Receive console messages
- [x] Send console commands
- [x] Autocomplete (maybe a bit cooler?)
- [x] Draggable for move and resize (resize maybe not at every panel edge for simplicity's sake, c.f. Dota 2)
- [x] Proper input handling
- [x] Keep functionality separate from design principle (i.e. allow quake-style console if desired)
- [x] HUD panel (developer 1 and con_drawnotify 1)
- [x] work like VGUI
- [x] General considerations
- [x] Bunch up identical messages (maybe something more elaborate down the line, just `Missing material xyz (x7)` for now)
- [x] Try to prevent large perf impact on high message load
- [x] Filtering
| priority | panorama console panel implement an ingame console based on panorama to solve the input issues arising from having vgui and panorama active simultaneously relevant issues planned features main panel receive console messages send console commands autocomplete maybe a bit cooler draggable for move and resize resize maybe not at every panel edge for simplicity s sake c f dota proper input handling keep functionality separate from design principle i e allow quake style console if desired hud panel developer and con drawnotify work like vgui general considerations bunch up identical messages maybe something more elaborate down the line just missing material xyz for now try to prevent large perf impact on high message load filtering | 1 |
35,999 | 2,794,530,361 | IssuesEvent | 2015-05-11 17:10:59 | TresysTechnology/clip | https://api.github.com/repos/TresysTechnology/clip | closed | local packages should be rebuilt only when changes have been made | bug High Priority | The build system is rebuilding the packages/ directory every time we run make. We need a better way to determine if there are changes within that directory, and then rebuild the rpms only in that case. | 1.0 | local packages should be rebuilt only when changes have been made - The build system is rebuilding the packages/ directory every time we run make. We need a better way to determine if there are changes within that directory, and then rebuild the rpms only in that case. | priority | local packages should be rebuilt only when changes have been made the build system is rebuilding the packages directory every time we run make we need a better way to determine if there are changes within that directory and then rebuild the rpms only in that case | 1 |
5,158 | 2,572,193,281 | IssuesEvent | 2015-02-10 21:00:33 | boxkite/ckanext-donneesqctheme | https://api.github.com/repos/boxkite/ckanext-donneesqctheme | opened | Add fields at organization level: contact email | High Priority | Add a field to provide a contact email at the organisation level. | 1.0 | Add fields at organization level: contact email - Add a field to provide a contact email at the organisation level. | priority | add fields at organization level contact email add a field to provide a contact email at the organisation level | 1 |
160,834 | 6,103,433,935 | IssuesEvent | 2017-06-20 18:42:06 | AZMAG/map-Employment | https://api.github.com/repos/AZMAG/map-Employment | closed | Cluster Definitions Excel Export Sheet is outdated | maintenance Priority: High | this sheet is outdated and needs to be replaced. waiting on updated sheet!
| 1.0 | Cluster Definitions Excel Export Sheet is outdated - this sheet is outdated and needs to be replaced. waiting on updated sheet!
| priority | cluster definitions excel export sheet is outdated this sheet is outdated and needs to be replaced waiting on updated sheet | 1 |
545,290 | 15,947,626,286 | IssuesEvent | 2021-04-15 03:58:32 | eliasfang/sp21-cse110-lab3 | https://api.github.com/repos/eliasfang/sp21-cse110-lab3 | opened | Style text | Level: 1 Priority: High Status: Backlog Type: Feature | **Is your feature request related to a problem? Please describe.**
All of the text of the page looks the same with very minimal size and style differences.
**Describe the solution you'd like**
Add more style to your text so that certain things pop out and look good.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A | 1.0 | Style text - **Is your feature request related to a problem? Please describe.**
All of the text of the page looks the same with very minimal size and style differences.
**Describe the solution you'd like**
Add more style to your text so that certain things pop out and look good.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A | priority | style text is your feature request related to a problem please describe all of the text of the page looks the same with very minimal size and style differences describe the solution you d like add more style to your text so that certain things pop out and look good describe alternatives you ve considered n a additional context n a | 1 |
715,930 | 24,615,738,119 | IssuesEvent | 2022-10-15 09:34:18 | AY2223S1-CS2103-F13-2/tp | https://api.github.com/repos/AY2223S1-CS2103-F13-2/tp | opened | Improve the edit command | type.Enhancement priority.High | Once survey is changed to let a person have multiple surveys. We can change the edit command to do an add instead of having to type every survey plus the new survey. | 1.0 | Improve the edit command - Once survey is changed to let a person have multiple surveys. We can change the edit command to do an add instead of having to type every survey plus the new survey. | priority | improve the edit command once survey is changed to let a person have multiple surveys we can change the edit command to do an add instead of having to type every survey plus the new survey | 1 |
221,121 | 7,374,001,142 | IssuesEvent | 2018-03-13 18:55:03 | dojot/dojot | https://api.github.com/repos/dojot/dojot | opened | GUI - Problems with long strings in the device edit page | Priority:High bug | The device edit page doesn't handle long strings!

| 1.0 | GUI - Problems with long strings in the device edit page - The device edit page doesn't handle long strings!

| priority | gui problems with long strings in the device edit page the device edit page doesn t handle long strings | 1 |
639,210 | 20,748,991,976 | IssuesEvent | 2022-03-15 04:22:20 | tpickering223/2DGameProject | https://api.github.com/repos/tpickering223/2DGameProject | opened | Add remove function to the Inventory back-end. | enhancement High Priority | Implement the ability for the back-end to remove an item from its container (i.e. the list that actually holds the items).
| 1.0 | Add remove function to the Inventory back-end. - Implement the ability for the back-end to remove an item from its container (i.e. the list that actually holds the items).
| priority | add remove function to the inventory back end implement the ability for the back end to remove an item from its container i e the list that actually holds the items | 1 |
185,933 | 6,732,055,655 | IssuesEvent | 2017-10-18 09:57:14 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | closed | Cannot delete the try-catch block when finally is added | 0.94-pre-release Priority/High Severity/Major Type/Bug | 1. Add finally block from the source view
2. Try to delete the entire try-catch
Cannot delete the try-catch block when finally is added | 1.0 | Cannot delete the try-catch block when finally is added - 1. Add finally block from the source view
2. Try to delete the entire try-catch
Cannot delete the try-catch block when finally is added | priority | cannot delete the try catch block when finally is added add finally block from the source view try to delete the entire try catch cannot delete the try catch block when finally is added | 1 |
394,502 | 11,644,634,376 | IssuesEvent | 2020-02-29 19:46:02 | ODM2/ODM2DataSharingPortal | https://api.github.com/repos/ODM2/ODM2DataSharingPortal | opened | 500 Server Error when leaf species not selected | bug high priority leaf-pack | A user has reported getting a 500 error when submitting the leaf pack data entry form “when don’t check off a leaf species ... even if I fill out the “Other” field to indicate experimental materials.”
It sounds like the code is requiring leaf species. It should be optional because we offer the “Other” field not only for experimental materials but also for leaf species not listed (especially important for users outside of eastern US).
I’m marking this as high priority because we hope to push Leaf Pack Network traffic to Monitor My Watershed this week. Thanks! | 1.0 | 500 Server Error when leaf species not selected - A user has reported getting a 500 error when submitting the leaf pack data entry form “when don’t check off a leaf species ... even if I fill out the “Other” field to indicate experimental materials.”
It sounds like the code is requiring leaf species. It should be optional because we offer the “Other” field not only for experimental materials but also for leaf species not listed (especially important for users outside of eastern US).
I’m marking this as high priority because we hope to push Leaf Pack Network traffic to Monitor My Watershed this week. Thanks! | priority | server error when leaf species not selected a user has reported getting a error when submitting the leaf pack data entry form “when don’t check off a leaf species even if i fill out the “other” field to indicate experimental materials ” it sounds like the code is requiring leaf species it should be optional because we offer the “other” field not only for experimental materials but also for leaf species not listed especially important for users outside of eastern us i’m marking this as high priority because we hope to push leaf pack network traffic to monitor my watershed this week thanks | 1 |
133,465 | 5,203,883,228 | IssuesEvent | 2017-01-24 14:12:06 | ctsit/qipr_approver | https://api.github.com/repos/ctsit/qipr_approver | closed | Import information from Spreadsheet of Existing Projects | High Priority In progress New Feature | As a customer, I may have a spreadsheet that has a list of projects I would like to import. Please either import the data or give me an import tool.
Matt has a copy of the data. | 1.0 | Import information from Spreadsheet of Existing Projects - As a customer, I may have a spreadsheet that has a list of projects I would like to import. Please either import the data or give me an import tool.
Matt has a copy of the data. | priority | import information from spreadsheet of existing projects as a customer i may have a spreadsheet that has a list of projects i would like to import please either import the data or give me an import tool matt has a copy of the data | 1 |
376,015 | 11,137,115,478 | IssuesEvent | 2019-12-20 18:25:27 | boston-microgreens/grow-app-project | https://api.github.com/repos/boston-microgreens/grow-app-project | opened | Inventory: Seed: Fix clear all button | bug priority-high | 'Clear All' button not clearing all inputs in the seeding form. | 1.0 | Inventory: Seed: Fix clear all button - 'Clear All' button not clearing all inputs in the seeding form. | priority | inventory seed fix clear all button clear all button not clearing all inputs in the seeding form | 1 |
331,000 | 10,058,647,267 | IssuesEvent | 2019-07-22 14:19:59 | INN/umbrella-currentorg | https://api.github.com/repos/INN/umbrella-currentorg | closed | Investigate emails not sending | Estimate < 2 Hours Priority: High | All of these options are checked, but the emails don't come through. Why? 🤔

| 1.0 | Investigate emails not sending - All of these options are checked, but the emails don't come through. Why? 🤔

| priority | investigate emails not sending all of these options are checked but the emails don t come through why 🤔 | 1 |
315,509 | 9,621,494,449 | IssuesEvent | 2019-05-14 10:46:39 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | reopened | Stop the animation only when the selected layer was removed | Priority: High Timeline bug | ### Description
Now any time the user removes a layer from the map, the animation stops. This is not needed at all. You may need to stop the anymation only when the user removes the guide layer (**if any**).
### Acceptance criteria
- The animation should not stop when the user remove a layer if
- There are remaining layers to animate in the timeline. (if there are not layers with time dimension, you should stop anyway).
- The removed layer is not the guide layer
### Suggested solution
The REMOVE_NODE removes the selected layer before any epic can do some check on it.
So you may need to keep in the epic the last selected layer by intercepting SELECTED_LAYER event, and use it to check if the layer removed is the correct one.
Please take also into account that
- `undefined` is a valid value for selection (means no layer selected)
- You have also to check if there is any layer in the timeline. If not, there is no reason to continue animation and you must stop it. | 1.0 | Stop the animation only when the selected layer was removed - ### Description
Now any time the user removes a layer from the map, the animation stops. This is not needed at all. You may need to stop the anymation only when the user removes the guide layer (**if any**).
### Acceptance criteria
- The animation should not stop when the user remove a layer if
- There are remaining layers to animate in the timeline. (if there are not layers with time dimension, you should stop anyway).
- The removed layer is not the guide layer
### Suggested solution
The REMOVE_NODE removes the selected layer before any epic can do some check on it.
So you may need to keep in the epic the last selected layer by intercepting SELECTED_LAYER event, and use it to check if the layer removed is the correct one.
Please take also into account that
- `undefined` is a valid value for selection (means no layer selected)
- You have also to check if there is any layer in the timeline. If not, there is no reason to continue animation and you must stop it. | priority | stop the animation only when the selected layer was removed description now any time the user removes a layer from the map the animation stops this is not needed at all you may need to stop the anymation only when the user removes the guide layer if any acceptance criteria the animation should not stop when the user remove a layer if there are remaining layers to animate in the timeline if there are not layers with time dimension you should stop anyway the removed layer is not the guide layer suggested solution the remove node removes the selected layer before any epic can do some check on it so you may need to keep in the epic the last selected layer by intercepting selected layer event and use it to check if the layer removed is the correct one please take also into account that undefined is a valid value for selection means no layer selected you have also to check if there is any layer in the timeline if not there is no reason to continue animation and you must stop it | 1 |
647,575 | 21,111,480,585 | IssuesEvent | 2022-04-05 02:27:34 | userigorgithub/whats-cookin | https://api.github.com/repos/userigorgithub/whats-cookin | opened | Search Bar Problems | bug high priority | On all pages, the searched results that are displayed do not allow user to favorite/unfavorite or add to/remove from Want-To-Cook. | 1.0 | Search Bar Problems - On all pages, the searched results that are displayed do not allow user to favorite/unfavorite or add to/remove from Want-To-Cook. | priority | search bar problems on all pages the searched results that are displayed do not allow user to favorite unfavorite or add to remove from want to cook | 1 |
401,516 | 11,791,190,017 | IssuesEvent | 2020-03-17 20:30:20 | ChainSafe/gossamer | https://api.github.com/repos/ChainSafe/gossamer | closed | don't build blocks while syncing | Priority: 2 - High babe core | while syncing blocks initially via BlockRequest/BlockResponse, a node shouldn't be running a babe session and building blocks. | 1.0 | don't build blocks while syncing - while syncing blocks initially via BlockRequest/BlockResponse, a node shouldn't be running a babe session and building blocks. | priority | don t build blocks while syncing while syncing blocks initially via blockrequest blockresponse a node shouldn t be running a babe session and building blocks | 1 |
204,508 | 7,088,258,809 | IssuesEvent | 2018-01-11 20:50:27 | TypeStrong/atom-typescript | https://api.github.com/repos/TypeStrong/atom-typescript | closed | Expose Project API as a service | priority:high question stale | I'm working on a demo/POC to provide autocompletion for typed webcomponents (Polymer/Typescript). To realize this I transform polymer elements (html) to a typescript class so I will have a view class that can be used to typecheck and for autocompletion etc. TypeScript class will map to html element tags and attributes/values.
The autocompletion is implemented as a provider for the autocomplete-plus package and will consume the Symbols from the atom-typescript Project API. The navigateTo API's are not complete enough to provide the information as I need the parent-child relationships between symbols (class-properties-methods) to provide autocompletion for element attributes etc.
What would be the proper way to implement this? (I now added a method getNamedDeclarations(...) to the project API) and will this be provided in the future?
Another approach would be to use the language service directly, but this isn't exposed from the atom-typescript package also is it? (or is there another way to achieve this besided the Atom provider/consumer pattern, like https://discuss.atom.io/t/depending-on-other-packages/2360.
| 1.0 | Expose Project API as a service - I'm working on a demo/POC to provide autocompletion for typed webcomponents (Polymer/Typescript). To realize this I transform polymer elements (html) to a typescript class so I will have a view class that can be used to typecheck and for autocompletion etc. TypeScript class will map to html element tags and attributes/values.
The autocompletion is implemented as a provider for the autocomplete-plus package and will consume the Symbols from the atom-typescript Project API. The navigateTo API's are not complete enough to provide the information as I need the parent-child relationships between symbols (class-properties-methods) to provide autocompletion for element attributes etc.
What would be the proper way to implement this? (I now added a method getNamedDeclarations(...) to the project API) and will this be provided in the future?
Another approach would be to use the language service directly, but this isn't exposed from the atom-typescript package also is it? (or is there another way to achieve this besided the Atom provider/consumer pattern, like https://discuss.atom.io/t/depending-on-other-packages/2360.
| priority | expose project api as a service i m working on a demo poc to provide autocompletion for typed webcomponents polymer typescript to realize this i transform polymer elements html to a typescript class so i will have a view class that can be used to typecheck and for autocompletion etc typescript class will map to html element tags and attributes values the autocompletion is implemented as a provider for the autocomplete plus package and will consume the symbols from the atom typescript project api the navigateto api s are not complete enough to provide the information as i need the parent child relationships between symbols class properties methods to provide autocompletion for element attributes etc what would be the proper way to implement this i now added a method getnameddeclarations to the project api and will this be provided in the future another approach would be to use the language service directly but this isn t exposed from the atom typescript package also is it or is there another way to achieve this besided the atom provider consumer pattern like | 1 |
689,972 | 23,641,947,664 | IssuesEvent | 2022-08-25 17:56:53 | episphere/dashboard | https://api.github.com/repos/episphere/dashboard | closed | Site filter on All Participants page not working | bug High Priority | The Site filter on the 'All Participants' page isn't working for some sites. I can filter on HFHS, Chicago, and KPCO but not on HP, KPNW and a few others. We are OK without a fix for this today (Friday) but will need a fix early next week. | 1.0 | Site filter on All Participants page not working - The Site filter on the 'All Participants' page isn't working for some sites. I can filter on HFHS, Chicago, and KPCO but not on HP, KPNW and a few others. We are OK without a fix for this today (Friday) but will need a fix early next week. | priority | site filter on all participants page not working the site filter on the all participants page isn t working for some sites i can filter on hfhs chicago and kpco but not on hp kpnw and a few others we are ok without a fix for this today friday but will need a fix early next week | 1 |
814,890 | 30,527,134,274 | IssuesEvent | 2023-07-19 12:05:49 | KinsonDigital/Infrastructure | https://api.github.com/repos/KinsonDigital/Infrastructure | closed | 🚧Fix sync issue with pr template | 🐛bug high priority ♻️cicd | ### I have done the items below . . .
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Fix an issue with the pr template not being found when syncing the issue to the pull request.
This is occurring because the location where the template is being looked at is incorrect. This works fine in infrastructure but when running the syncing system in other projects that don't contain the template, this issue occurs.
This is because the pull request head branch is being used when checking if the file exists when the branch being used should be the branch of the repo where the template exists. Currently, the template exists in the **Infrastructure** repository but the script is using the **Infrastructure** repo with a head branch from the pull request where the syncing system is being executed.
This will require a new repository variable to be created with the name **PR_SYNC_TEMPLATE_BRANCH_NAME**. This will hold the branch where the template exists in the repository.
### Acceptance Criteria
**This issue is finished when:**
- [x] Sync location bug fixed
### ToDo Items
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Issue linked to the correct project.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|---------------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| Enhancement | `enhancement` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation code` |
| Product Doc Changes | `📝documentation product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|--------------------------------------------------------------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
<!--closed-by-pr:120--> | 1.0 | 🚧Fix sync issue with pr template - ### I have done the items below . . .
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Fix an issue with the pr template not being found when syncing the issue to the pull request.
This is occurring because the location where the template is being looked at is incorrect. This works fine in infrastructure but when running the syncing system in other projects that don't contain the template, this issue occurs.
This is because the pull request head branch is being used when checking if the file exists when the branch being used should be the branch of the repo where the template exists. Currently, the template exists in the **Infrastructure** repository but the script is using the **Infrastructure** repo with a head branch from the pull request where the syncing system is being executed.
This will require a new repository variable to be created with the name **PR_SYNC_TEMPLATE_BRANCH_NAME**. This will hold the branch where the template exists in the repository.
### Acceptance Criteria
**This issue is finished when:**
- [x] Sync location bug fixed
### ToDo Items
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Issue linked to the correct project.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|---------------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| Enhancement | `enhancement` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation code` |
| Product Doc Changes | `📝documentation product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|--------------------------------------------------------------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
<!--closed-by-pr:120--> | priority | 🚧fix sync issue with pr template i have done the items below i have updated the title without removing the 🚧 emoji description fix an issue with the pr template not being found when syncing the issue to the pull request this is occurring because the location where the template is being looked at is incorrect this works fine in infrastructure but when running the syncing system in other projects that don t contain the template this issue occurs this is because the pull request head branch is being used when checking if the file exists when the branch being used should be the branch of the repo where the template exists currently the template exists in the infrastructure repository but the script is using the infrastructure repo with a head branch from the pull request where the syncing system is being executed this will require a new repository variable to be created with the name pr sync template branch name this will hold the branch where the template exists in the repository acceptance criteria this issue is finished when sync location bug fixed todo items priority label added to this issue refer to the priority type labels section below change type labels added to this issue refer to the change type labels section below issue linked to the correct project issue dependencies no response related work no response additional information change type labels change type label bug fixes 🐛bug breaking changes 🧨breaking changes enhancement enhancement workflow changes workflow code doc changes 🗒️documentation code product doc changes 📝documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct | 1 |
435,689 | 12,539,468,283 | IssuesEvent | 2020-06-05 08:39:53 | Twin-Cities-Mutual-Aid/twin-cities-aid-distribution-locations | https://api.github.com/repos/Twin-Cities-Mutual-Aid/twin-cities-aid-distribution-locations | closed | Make last-updated much more prominent on pin popups | Priority: High Type: Improvement | User feedback was given that people have driven miles to stores that turn out to not have up-to-date needs lists right now and didn't need / didn't have what was listed. We should move "last updated" to the top of the pin data so people aren't driving for no good reason. Also, possibly, style it so it's more visible -- somehow. Anything to draw attention to when the data was actually vetted.
| 1.0 | Make last-updated much more prominent on pin popups - User feedback was given that people have driven miles to stores that turn out to not have up-to-date needs lists right now and didn't need / didn't have what was listed. We should move "last updated" to the top of the pin data so people aren't driving for no good reason. Also, possibly, style it so it's more visible -- somehow. Anything to draw attention to when the data was actually vetted.
| priority | make last updated much more prominent on pin popups user feedback was given that people have driven miles to stores that turn out to not have up to date needs lists right now and didn t need didn t have what was listed we should move last updated to the top of the pin data so people aren t driving for no good reason also possibly style it so it s more visible somehow anything to draw attention to when the data was actually vetted | 1 |
523,140 | 15,173,524,390 | IssuesEvent | 2021-02-13 14:32:18 | WasiqB/coteafs-appium | https://api.github.com/repos/WasiqB/coteafs-appium | closed | Replace coteafs-config to coteafs-datasource. | effort: 2 priority: p1 severity: high type: dependencies work: obvious | Replace coteafs-config with coteafs-datasource and update the config code accordingly. | 1.0 | Replace coteafs-config to coteafs-datasource. - Replace coteafs-config with coteafs-datasource and update the config code accordingly. | priority | replace coteafs config to coteafs datasource replace coteafs config with coteafs datasource and update the config code accordingly | 1 |
564,966 | 16,746,137,475 | IssuesEvent | 2021-06-11 15:44:25 | apache/airflow | https://api.github.com/repos/apache/airflow | closed | apply_defaults doesn't run for decorated task | AIP-31 kind:bug priority:high | **Apache Airflow version**: 2.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): Linux 4.4.0-18362-Microsoft #1049-Microsoft Thu Aug 14 12:01:00 PST 2020 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
The apply_defaults doesn't work for decorated tasks after upgrading to Airflow 2.1.
**What you expected to happen**:
Missing task arguments to be filled by values from DAG default_args. Airflow 2.1 expected to apply by default (https://github.com/apache/airflow/pull/15667) but doesn't work either with or without apply_defaults decorator being used.
**How to reproduce it**:
Sample DAG attached as txt file.
[task_decorator_test.txt](https://github.com/apache/airflow/files/6540740/task_decorator_test.txt)
If run in Airflow 2.0.2, this prints in the logs the conn_id value showing that it was picked up from the DAG default_args.

If run in Airflow 2.1, this causes TypeError for missing required argument in the scheduler. If a default value is given to the argument, (making it optional), the task can run, but the apply_defaults to use the DAG's default_args value doesn't run.

**Anything else we need to know**: issue happens with Airflow 2.1 | 1.0 | apply_defaults doesn't run for decorated task - **Apache Airflow version**: 2.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): Linux 4.4.0-18362-Microsoft #1049-Microsoft Thu Aug 14 12:01:00 PST 2020 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
The apply_defaults doesn't work for decorated tasks after upgrading to Airflow 2.1.
**What you expected to happen**:
Missing task arguments should be filled with values from the DAG's default_args. Airflow 2.1 is expected to apply them by default (https://github.com/apache/airflow/pull/15667), but this doesn't work either with or without the apply_defaults decorator.
**How to reproduce it**:
Sample DAG attached as txt file.
[task_decorator_test.txt](https://github.com/apache/airflow/files/6540740/task_decorator_test.txt)
If run in Airflow 2.0.2, this prints in the logs the conn_id value showing that it was picked up from the DAG default_args.

If run in Airflow 2.1, this causes a TypeError for a missing required argument in the scheduler. If a default value is given to the argument (making it optional), the task can run, but apply_defaults never fills in the value from the DAG's default_args.

**Anything else we need to know**: issue happens with Airflow 2.1 | priority | apply defaults doesn t run for decorated task apache airflow version kubernetes version if you are using kubernetes use kubectl version n a environment cloud provider or hardware configuration os e g from etc os release ubuntu lts kernel e g uname a linux microsoft microsoft thu aug pst gnu linux install tools others what happened the apply defaults doesn t work for decorated tasks after upgrading to airflow what you expected to happen missing task arguments to be filled by values from dag default args airflow expected to apply by default but doesn t work either with or without apply defaults decorator being used how to reproduce it sample dag attached as txt file if run in airflow this prints in the logs the conn id value showing that it was picked up from the dag default args if run in airflow this causes typeerror for missing required argument in the scheduler if a default value is given to the argument making it optional the task can run but the apply defaults to use the dag s default args value doesn t run anything else we need to know issue happens with airflow | 1 |
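The behaviour the reporter expects (missing task arguments filled from the DAG-level default_args) can be sketched with plain dictionary merging. The helper below is a hypothetical illustration only; Airflow's real apply_defaults logic lives in the operator machinery and covers many more cases:

```python
# Illustration of the expected default_args behaviour (hypothetical helper,
# not Airflow's real internals): missing task arguments are filled from the
# DAG-level defaults, while explicit task arguments take precedence.
def apply_defaults(default_args, task_kwargs):
    """Return task kwargs with missing keys filled from default_args."""
    merged = dict(default_args)  # start from the DAG-level defaults
    merged.update(task_kwargs)   # explicit task arguments win
    return merged

default_args = {"conn_id": "my_conn", "retries": 2}
resolved = apply_defaults(default_args, {"retries": 0})
# conn_id comes from default_args; retries comes from the explicit kwargs
```

Under this model, the bug report amounts to `conn_id` never being merged in for decorated tasks, so the function signature's missing-argument check fires in the scheduler.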
97,489 | 3,994,273,264 | IssuesEvent | 2016-05-10 11:43:42 | tomleibo/distributedSystems | https://api.github.com/repos/tomleibo/distributedSystems | closed | BUG: when starting 2 locals in parallel, they start 2 managers | bug High priority | When starting 2 locals in parallel, they start 2 managers. is there any way to prevent this?
@tomleibo what do you think? | 1.0 | BUG: when starting 2 locals in parallel, they start 2 managers - When starting 2 locals in parallel, they start 2 managers. is there any way to prevent this?
@tomleibo what do you think? | priority | bug when starting locals in parallel they start managers when starting locals in parallel they start managers is there any way to prevent this tomleibo what do you think | 1 |
592,704 | 17,928,340,195 | IssuesEvent | 2021-09-10 05:02:00 | dataware-tools/dataware-tools | https://api.github.com/repos/dataware-tools/dataware-tools | closed | Check permissions in api-file-provider | wg/web-app priority/high | ## Purpose
Feature request
## Description
Perform permission checks against api-permission-manager in api-file-provider
## TODOs
- [x] Factor the permission-check client into a shared module
- [x] Use the shared client in meta-store
- [x] Add file-related actions to permission-manager
- [x] Add permission checks to file-provider
- [x] Fix implementation mistakes
- [x] Release each API once verified working
- [x] Update protocols
- [x] Update app-data-browser-next
- https://github.com/dataware-tools/app-data-browser-next/blob/4c7140ca86922f4d57c38651deac17dfc9e2b29a/src/components/organisms/FileList.tsx#L145 | 1.0 | Check permissions in api-file-provider - ## Purpose
Feature request
## Description
Perform permission checks against api-permission-manager in api-file-provider
## TODOs
- [x] Factor the permission-check client into a shared module
- [x] Use the shared client in meta-store
- [x] Add file-related actions to permission-manager
- [x] Add permission checks to file-provider
- [x] Fix implementation mistakes
- [x] Release each API once verified working
- [x] Update protocols
- [x] Update app-data-browser-next
- https://github.com/dataware-tools/app-data-browser-next/blob/4c7140ca86922f4d57c38651deac17dfc9e2b29a/src/components/organisms/FileList.tsx#L145 | priority | check permissions in api file provider purpose feature request description perform permission checks against api permission manager in api file provider todos factor the permission check client into a shared module use the shared client in meta store add file related actions to permission manager add permission checks to file provider fix implementation mistakes release each api once verified working update protocols update app data browser next | 1 |
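A shared permission-check client of the kind these TODOs describe could look roughly like the sketch below. All names here (check_permission, the action strings, the in-memory POLICY table) are hypothetical illustrations, not the actual api-permission-manager interface, which a real implementation would call over HTTP:

```python
# Hypothetical sketch of a shared permission-check client that both
# meta-store and file-provider could reuse. A real implementation would
# query the api-permission-manager API instead of an in-memory table.
POLICY = {
    ("admin", "file:read"): True,
    ("admin", "file:write"): True,
    ("viewer", "file:read"): True,
}

def check_permission(role, action):
    """Return True if the given role is allowed to perform the action."""
    return POLICY.get((role, action), False)

def provide_file(role, path):
    """File-provider entry point: check permission before serving a file."""
    if not check_permission(role, "file:read"):
        raise PermissionError(f"{role} may not read {path}")
    return f"contents of {path}"  # placeholder for the real file read
```

Centralising the check in one function is what makes the "use the shared client in meta-store" item cheap: both services call the same code path, so the file-related actions only need to be defined once.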
754,991 | 26,411,836,379 | IssuesEvent | 2023-01-13 12:56:15 | LiskHQ/lisk-desktop | https://api.github.com/repos/LiskHQ/lisk-desktop | closed | Remove unlock feature from wallet | type: bug priority: high domain: pos | ### Expected behavior
Remove the ability to unlock from the wallet. Refer to the design and update the table accordingly: https://www.figma.com/file/KcrDpvWEKQhdGwNd4CZ5NY/Desktop-Prototype?node-id=38%3A13142&t=prTQCIyJ1X49773R-4
Unlock should only exist under the pos domain
### Actual behavior
<img width="1658" alt="Screenshot 2022-11-23 at 12 38 03 PM" src="https://user-images.githubusercontent.com/6449871/203489388-6d5b84fa-974a-4b4c-841b-a9c6807f4d4a.png">
### Steps to reproduce
- Navigate to Wallets tab
- Click on `all tokens`
### Which version(s) does this affect? (Environment, OS, etc...)
v3 | 1.0 | Remove unlock feature from wallet - ### Expected behavior
Remove the ability to unlock from the wallet. Refer to the design and update the table accordingly: https://www.figma.com/file/KcrDpvWEKQhdGwNd4CZ5NY/Desktop-Prototype?node-id=38%3A13142&t=prTQCIyJ1X49773R-4
Unlock should only exist under the pos domain
### Actual behavior
<img width="1658" alt="Screenshot 2022-11-23 at 12 38 03 PM" src="https://user-images.githubusercontent.com/6449871/203489388-6d5b84fa-974a-4b4c-841b-a9c6807f4d4a.png">
### Steps to reproduce
- Navigate to Wallets tab
- Click on `all tokens`
### Which version(s) does this affect? (Environment, OS, etc...)
v3 | priority | remove unlock feature from wallet expected behavior remove the ability to unlock from the wallet refer to the design and update the table accordingly unlock should only exist under pos domain actual behavior img width alt screenshot at pm src steps to reproduce navigate to wallets tab click on all tokens which version s does this affect environment os etc | 1 |
801,819 | 28,503,848,010 | IssuesEvent | 2023-04-18 19:36:29 | status-im/status-mobile | https://api.github.com/repos/status-im/status-mobile | closed | Unable to build app on iOS in any way | high-priority developer-xp high-severity | I am not able to build the app on iOS.
`make run-ios` fails with error: `error Failed to build iOS project. We ran "xcodebuild" command but it exited with error code 65. To debug build logs further, consider building your app with Xcode.app, by opening StatusIm.xcworkspace.`
Building from Xcode on Physical device fails with error: `PhaseScriptExecution failed with non-zero exit code`
Building from Xcode on Simulator builds, but the app never goes past the splash screen.
Below is logs file:
[react-native-xcode.log](https://github.com/status-im/status-mobile/files/11254752/react-native-xcode.log)
| 1.0 | Unable to build app on iOS in any way - I am not able to build the app on iOS.
`make run-ios` fails with error: `error Failed to build iOS project. We ran "xcodebuild" command but it exited with error code 65. To debug build logs further, consider building your app with Xcode.app, by opening StatusIm.xcworkspace.`
Building from Xcode on Physical device fails with error: `PhaseScriptExecution failed with non-zero exit code`
Building from Xcode on Simulator builds, but the app never goes past the splash screen.
Below is logs file:
[react-native-xcode.log](https://github.com/status-im/status-mobile/files/11254752/react-native-xcode.log)
| priority | unable to build app on ios in any way i am not able to build the app on ios make run ios fails with error error failed to build ios project we ran xcodebuild command but it exited with error code to debug build logs further consider building your app with xcode app by opening statusim xcworkspace building from xcode on physical device fails with error phasescriptexecution failed with non zero exit code building from xcode on simulator builds but the app never goes past the splash screen below is logs file | 1 |
639,828 | 20,766,957,532 | IssuesEvent | 2022-03-15 21:44:28 | monarch-initiative/mondo | https://api.github.com/repos/monarch-initiative/mondo | closed | revise capitalization of clingen preferred AP | high priority ClinGen | clingen would like it to be ClinGen preferred
are there any issues with having capitals in APs @matentzn ? | 1.0 | revise capitalization of clingen preferred AP - clingen would like it to be ClinGen preferred
are there any issues with having capitals in APs @matentzn ? | priority | revise capitalization of clingen preferred ap clingen would like it to be clingen preferred are there any issues with having capitals in aps matentzn | 1 |
630,800 | 20,118,123,034 | IssuesEvent | 2022-02-07 21:55:57 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | no history shown when leaving and rejoining public chat | bug Chat priority 1: high | ### Description
1. join some public chat
2. receive some messages
3. leave this chat
4. join the same chat again
```
INF 2022-01-27 16:11:28.325-05:00 history request started topics="mailservers-service" tid=2789208 file=service.nim:65 requestId=58363033-d2f6-4b38-b443-f3a91d809d9d numBatches=1
INF 2022-01-27 16:12:06.188-05:00 history request failed topics="mailservers-service" tid=2789208 file=service.nim:77 requestId=3e8f31a1-dab1-4489-b155-d9cff4f3d0e8 errorMessage="context deadline exceeded"
INF 2022-01-27 16:12:07.768-05:00 history request failed topics="mailservers-service" tid=2789208 file=service.nim:77 requestId=f036da60-dbcc-4853-b04b-0c65629f1ed6 errorMessage="context deadline exceeded"
INF 2022-01-27 16:12:13.142-05:00 history request failed topics="mailservers-service" tid=2789208 file=service.nim:77 requestId=58363033-d2f6-4b38-b443-f3a91d809d9d errorMessage="context deadline exceeded"
WRN 2022-01-27 16:13:04.051-05:00 Error decoding signal topics="signals-manager" tid=2789208 file=signals_manager.nim:43 err="Unknown signal received: backup.performed"
```

| 1.0 | no history shown when leaving and rejoining public chat - ### Description
1. join some public chat
2. receive some messages
3. leave this chat
4. join the same chat again
```
INF 2022-01-27 16:11:28.325-05:00 history request started topics="mailservers-service" tid=2789208 file=service.nim:65 requestId=58363033-d2f6-4b38-b443-f3a91d809d9d numBatches=1
INF 2022-01-27 16:12:06.188-05:00 history request failed topics="mailservers-service" tid=2789208 file=service.nim:77 requestId=3e8f31a1-dab1-4489-b155-d9cff4f3d0e8 errorMessage="context deadline exceeded"
INF 2022-01-27 16:12:07.768-05:00 history request failed topics="mailservers-service" tid=2789208 file=service.nim:77 requestId=f036da60-dbcc-4853-b04b-0c65629f1ed6 errorMessage="context deadline exceeded"
INF 2022-01-27 16:12:13.142-05:00 history request failed topics="mailservers-service" tid=2789208 file=service.nim:77 requestId=58363033-d2f6-4b38-b443-f3a91d809d9d errorMessage="context deadline exceeded"
WRN 2022-01-27 16:13:04.051-05:00 Error decoding signal topics="signals-manager" tid=2789208 file=signals_manager.nim:43 err="Unknown signal received: backup.performed"
```

| priority | no history shown when leaving and rejoining public chat description join some public chat receive some messages leave this chat join the same chat again inf history request started topics mailservers service tid file service nim requestid numbatches inf history request failed topics mailservers service tid file service nim requestid errormessage context deadline exceeded inf history request failed topics mailservers service tid file service nim requestid dbcc errormessage context deadline exceeded inf history request failed topics mailservers service tid file service nim requestid errormessage context deadline exceeded wrn error decoding signal topics signals manager tid file signals manager nim err unknown signal received backup performed | 1 |
479,725 | 13,805,107,836 | IssuesEvent | 2020-10-11 12:14:40 | OpenSRP/opensrp-client-reveal | https://api.github.com/repos/OpenSRP/opensrp-client-reveal | opened | No Data Transferred When OA selected during P2P Sync | Priority: High | - [ ] On the Zambia APK version 5.3.4 when doing a P2P sync, the app provides an option to select the OA from which data is to be transferred between the syncing devices. However, the sync process is marked as successful but shows that zero(0) files were transferred. A check on the receiving phone shows that no data is transferred. The expected behaviour is that data should be transferred based on the selected OA. It was however noted that when the parent of the OA is selected during the P2P sync, data is successfully transferred. | 1.0 | No Data Transferred When OA selected during P2P Sync - - [ ] On the Zambia APK version 5.3.4 when doing a P2P sync, the app provides an option to select the OA from which data is to be transferred between the syncing devices. However, the sync process is marked as successful but shows that zero(0) files were transferred. A check on the receiving phone shows that no data is transferred. The expected behaviour is that data should be transferred based on the selected OA. It was however noted that when the parent of the OA is selected during the P2P sync, data is successfully transferred. | priority | no data transferred when oa selected during sync on the zambia apk version when doing a sync the app provides an option to select the oa from which data is to be transferred between the syncing devices however the sync process is marked as successful but shows that zero files were transferred a check on the receiving phone shows that no data is transferred the expected behaviour is that data should be transferred based on the selected oa it was however noted that when the parent of the oa is selected during the sync data is successfully transferred | 1 |
370,387 | 10,931,384,830 | IssuesEvent | 2019-11-23 09:43:40 | bounswe/bounswe2019group8 | https://api.github.com/repos/bounswe/bounswe2019group8 | closed | Develop Forex Screen | Effort: High Mobile Platform: Mobile Priority: High Status: In Progress Type: Feature | **Actions:**
1. Create forex screen to view forex items.
1. Create forex items.
1. Connect with backend.
**Notes:**
- [x] Create forex screen to view forex items.
- [x] Create forex items.
- [x] Connect with backend.
**Deadline:** 25.10.2019 - 23.59 | 1.0 | Develop Forex Screen - **Actions:**
1. Create forex screen to view forex items.
1. Create forex items.
1. Connect with backend.
**Notes:**
- [x] Create forex screen to view forex items.
- [x] Create forex items.
- [x] Connect with backend.
**Deadline:** 25.10.2019 - 23.59 | priority | develop forex screen actions create forex screen to view forex items create forex items connect with backend notes create forex screen to view forex items create forex items connect with backend deadline | 1 |
232,304 | 7,657,500,270 | IssuesEvent | 2018-05-10 19:53:09 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Server Browser: My Servers won't refresh correctly | High Priority | > When a server's status changes, it is not reflected in "My Servers"; if you refresh to update the status, nothing happens except that "Recent" updates. | 1.0 | Server Browser: My Servers won't refresh correctly - > When a server's status changes, it is not reflected in "My Servers"; if you refresh to update the status, nothing happens except that "Recent" updates. | priority | server browser my servers won t refresh correctly when a server s status changes it is not reflected in my servers if you refresh to update the status nothing happens except that recent updates | 1 |
608,343 | 18,836,197,444 | IssuesEvent | 2021-11-11 01:23:21 | biocodellc/localcontexts_db | https://api.github.com/repos/biocodellc/localcontexts_db | closed | onboarding modal not popping up when it's supposed to | bug high priority | On login at registration / email verification `last_login` gets updated, making the modal not pop up.
Possible solution:
Add `onboarding_on` boolean field to `Profile` model, set to `True` at user activation (or default) and then `False` on completion of onboarding, when user clicks 'Finish'. | 1.0 | onboarding modal not popping up when it's supposed to - On login at registration / email verification `last_login` gets updated, making the modal not pop up.
Possible solution:
Add `onboarding_on` boolean field to `Profile` model, set to `True` at user activation (or default) and then `False` on completion of onboarding, when user clicks 'Finish'. | priority | onboarding modal not popping up when its supposed to on login at registration email verification last login gets updated making the modal not pop up possible solution add onboarding on boolean field to profile model set to true at user activation or default and then false on completion of onboarding when user clicks finish | 1 |
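The proposed fix, an onboarding_on flag that survives logins and is cleared only when the user clicks 'Finish', can be sketched in plain Python. In the real app this would be a BooleanField(default=True) on the Django Profile model; the class below is only a stand-in:

```python
# Plain-Python sketch of the proposed fix: track onboarding with an
# explicit flag instead of inferring it from last_login. In the real app
# this would be a Django model field: onboarding_on = BooleanField(default=True)
from dataclasses import dataclass

@dataclass
class Profile:
    user: str
    onboarding_on: bool = True  # set at activation, cleared on 'Finish'

    def should_show_onboarding(self):
        """The modal keeps showing across logins until onboarding is done."""
        return self.onboarding_on

    def finish_onboarding(self):
        """Called when the user clicks 'Finish' in the onboarding modal."""
        self.onboarding_on = False

p = Profile("alice")
```

Because the flag is independent of `last_login`, updating `last_login` at registration or email verification no longer hides the modal.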
831,454 | 32,049,666,999 | IssuesEvent | 2023-09-23 12:04:58 | cesium/atomic | https://api.github.com/repos/cesium/atomic | closed | Create and fix form components | help wanted enhancement frontend priority:high | The forms listed below still need to be done; the permissions for who can access them are listed under each one.
Organizations:
- [ ] new -> Application admin
- [ ] edit -> Organization owner and admin or Application admin
Board:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
Partners:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
Departments:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
Activities:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
In case you have any doubts, feel free to ask it :pray: | 1.0 | Create and fix form components - The forms listed below still need to be done; the permissions for who can access them are listed under each one.
Organizations:
- [ ] new -> Application admin
- [ ] edit -> Organization owner and admin or Application admin
Board:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
Partners:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
Departments:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
Activities:
- [ ] new -> Organization owner and admin
- [ ] edit -> Organization owner and admin
In case you have any doubts, feel free to ask it :pray: | priority | create and fix form components the following forms listed below are in need of doing i will thread the permissions of who can access them organizations new application admin edit organization owner and admin or application admin board new organization owner and admin edit organization owner and admin partners new organization owner and admin edit organization owner and admin departments new organization owner and admin edit organization owner and admin activities new organization owner and admin edit organization owner and admin in case you have any doubts feel free to ask it pray | 1 |
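The access rules in the checklist above can be condensed into one small permission function. This is a language-neutral Python sketch (the project itself is Elixir/Phoenix) with hypothetical names; only the organization resource special-cases the application admin:

```python
# Sketch of the access rules from the checklist above (hypothetical names).
# "new"/"edit" on organizations are special-cased for application admins;
# boards, partners, departments and activities are org owner/admin only.
ORG_ROLES = {"owner", "admin"}

def can_manage(resource, action, role, is_app_admin=False):
    """Return True if the role may perform new/edit on the resource."""
    if resource == "organization":
        if action == "new":
            return is_app_admin  # only application admins create orgs
        return is_app_admin or role in ORG_ROLES  # edit
    # board, partner, department, activity
    return role in ORG_ROLES
```

Keeping the rules in one function means the form components only need to ask one question before rendering their new/edit actions.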
799,218 | 28,302,281,370 | IssuesEvent | 2023-04-10 07:27:17 | KinsonDigital/Velaptor | https://api.github.com/repos/KinsonDigital/Velaptor | closed | 🚧Create image loader | ✨new feature high priority preview | ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Create an image loader class.
**Purpose:**
To give the users the ability to load images manually during runtime.
**Use Case:**
The loader will return a struct of type `ImageData`. This will give the users the ability to manually manipulate the image during runtime. They can then use this `ImageData` to create a new `Texture` object used for rendering. This will basically give the user the ability to manipulate the image before rendering it.
**`ImageLoader` Features:**
- Loads images by pointing to any file on disk.
- Loads images using content relative paths
> **Note** This means if a full path is not used, then it is assumed that it is a path to the _**Content/Graphics**_ content directory. If the path is _**NOT**_ a fully qualified path then an extension is not required due to the path representing a path to the content directory.
**`ImageData` Features:**
Add the following features to the `ImageData` struct
- Add a method to the struct to flip the image horizontally
- Add a method to the struct to flip the image vertically
- Add `bool` property to the struct named `IsFlippedHorizontally`
- Add `bool` property to the struct named `IsFlippedVertically`
### Acceptance Criteria
- [x] `ImageLoader` class created.
- [x] Loads images by pointing to any file on disk.
- [x] Loads images using relative content paths
- [x] Features added to the `ImageData` struct
- [x] Method added to the struct to flip the image horizontally
- [x] Method added to the struct to flip the image vertically
- [x] `bool` property added to the struct named `IsFlippedHorizontally`
- [x] `bool` property added to the struct named `IsFlippedVertically`
- [x] Create an issue to create documentation for the website
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [x] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct. | 1.0 | 🚧Create image loader - ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Create an image loader class.
**Purpose:**
To give the users the ability to load images manually during runtime.
**Use Case:**
The loader will return a struct of type `ImageData`. This will give the users the ability to manually manipulate the image during runtime. They can then use this `ImageData` to create a new `Texture` object used for rendering. This will basically give the user the ability to manipulate the image before rendering it.
**`ImageLoader` Features:**
- Loads images by pointing to any file on disk.
- Loads images using content relative paths
> **Note** This means if a full path is not used, then it is assumed that it is a path to the _**Content/Graphics**_ content directory. If the path is _**NOT**_ a fully qualified path then an extension is not required due to the path representing a path to the content directory.
**`ImageData` Features:**
Add the following features to the `ImageData` struct
- Add a method to the struct to flip the image horizontally
- Add a method to the struct to flip the image vertically
- Add `bool` property to the struct named `IsFlippedHorizontally`
- Add `bool` property to the struct named `IsFlippedVertically`
### Acceptance Criteria
- [x] `ImageLoader` class created.
- [x] Loads images by pointing to any file on disk.
- [x] Loads images using relative content paths
- [x] Features added to the `ImageData` struct
- [x] Method added to the struct to flip the image horizontally
- [x] Method added to the struct to flip the image vertically
- [x] `bool` property added to the struct named `IsFlippedHorizontally`
- [x] `bool` property added to the struct named `IsFlippedVertically`
- [x] Create an issue to create documentation for the website
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [x] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct. | priority | 🚧create image loader complete the item below i have updated the title without removing the 🚧 emoji description create an image loader class purpose to give the users the ability to load images manually during runtime use case the loader will return a struct of type imagedata this will give the users the ability to manually manipulate the image during runtime they can then use this imagedata to create a new texture object used for rendering this will basically give the user the ability to manipulate the image before rendering it imageloader features loads images by pointing to any file on disk loads images using content relative paths note this means if a full path is not used then it is assumed that it is a path to the content graphics content directory if the path is not a fully qualified path then an extension is not required due to the path representing a path to the content directory imagedata features add the following features to the imagedata struct add a method to the struct to flip the image horizontally add a method to the struct to flip the image vertically add bool property to the struct named isflippedhorizontally add bool property to the struct named isflippedvertically acceptance criteria imageloader class created loads images by pointing to any file on disk loads images using relative content paths features added to the imagedata struct method added to the struct to flip the image horizontally method added to the struct to flip the image vertically bool property added to the struct named isflippedhorizontally bool property added to the struct named isflippedvertically create an issue to create documentation for the website todo items change type labels added to this issue refer to the change type labels section below priority label added to this issue refer to the priority type labels section below issue linked to the correct project if applicable issue linked to the correct 
milestone if applicable draft pull request created and linked to this issue only required with code changes issue dependencies no response related work no response additional information change type labels change type label bug fixes 🐛bug breaking changes 🧨breaking changes new feature ✨new feature workflow changes workflow code doc changes 🗒️documentation code product doc changes 📝documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct | 1 |
134,913 | 5,239,721,686 | IssuesEvent | 2017-01-31 10:44:03 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | Unable to run application due to missing Assert.notNull method | priority: high type: bug | Bug report
An application built with Spring Boot version 2.0.0-BUILD-SNAPSHOT cannot start due to the missing single-argument Assert.notNull method
```
java.lang.NoSuchMethodError: org.springframework.util.Assert.notNull(Ljava/lang/Object;)V
at org.springframework.boot.bind.PropertiesConfigurationFactory.<init>(PropertiesConfigurationFactory.java:92) ~[spring-boot-2.0.0.BUILD-20170130.202942-338.jar:2.0.0.BUILD-SNAPSHOT]
```
| 1.0 | Unable to run application due to missing Assert.notNull method - Bug report
An application built with Spring Boot version 2.0.0-BUILD-SNAPSHOT cannot start due to the missing single-argument Assert.notNull method
```
java.lang.NoSuchMethodError: org.springframework.util.Assert.notNull(Ljava/lang/Object;)V
at org.springframework.boot.bind.PropertiesConfigurationFactory.<init>(PropertiesConfigurationFactory.java:92) ~[spring-boot-2.0.0.BUILD-20170130.202942-338.jar:2.0.0.BUILD-SNAPSHOT]
```
| priority | unable to run application due to missing assert notnull method bug report an application built with spring boot version build snapshot cannot start due to missing assert notnull single argument method java lang nosuchmethoderror org springframework util assert notnull ljava lang object v at org springframework boot bind propertiesconfigurationfactory propertiesconfigurationfactory java | 1 |
739,875 | 25,726,121,430 | IssuesEvent | 2022-12-07 16:46:53 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | 12.1 LDAP and Active Directory Authentication source -Blocking Issue | Type: Bug Priority: High | **Describe the bug**
Within the certificates tab of the authentication source, Client Certificate File and Client Key File have become mandatory. This is the case for the following authentication sources (which appear to use the same template).
Active Directory
LDAP
The side effect is that, when creating a new source that does not need certificates or when modifying an existing source, the Create or Save button is disabled.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Configuration | Policies and Access Control | Authentication Sources
2. Click on "New Internal Source"
3. Select LDAP
4. See error with red 2 next to certificates
**Screenshots**

**Expected behavior**
Able to create or save the authentication source without the need for applying a certificate.
**Desktop (please complete the following information):**
- OS: Windows
- Browser Edge and Chrome
- Version 107
**Additional context**
I also noticed that there may be a second bug in this area. If you add a fake .crt file to get past the save, it gets uploaded, but the path to it is not displayed when re-entering the certificates tab.
| 1.0 | 12.1 LDAP and Active Directory Authentication source -Blocking Issue - **Describe the bug**
Within the certificates tab of the authentication source, Client Certificate File and Client Key File have become mandatory. This is the case for the following authentication sources (which appear to use the same template).
Active Directory
LDAP
The side effect is that when creating a new source that does not need certificates, or when modifying an existing source, the Create or Save button is disabled.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Configuration | Policies and Access Control | Authentication Sources
2. Click on "New Internal Source"
3. Select LDAP
4. See error with red 2 next to certificates
**Screenshots**

**Expected behavior**
Able to create or save the authentication source without the need for applying a certificate.
**Desktop (please complete the following information):**
- OS: Windows
- Browser Edge and Chrome
- Version 107
**Additional context**
I also noticed that there may be a second bug in this area. If you add a fake crt file to get past the save, it gets uploaded, but the path to it does not get displayed when re-entering the certificates tab.
| priority | ldap and active directory authentication source blocking issue describe the bug within the certificates tab of the authentication source client certificate file and client key file has become mandatory this is the case for the following authentication sources which appear to use the same template active directory ldap the side effect is when creating either a new source that does not need to use certificates or needing to modify an existing source the create button or save button is disabled to reproduce steps to reproduce the behavior go to configuration policies and access control authentication sources click on new internal source select ldap see error with red next to certificates screenshots expected behavior able to create or save the authentication source without the need for applying a certificate desktop please complete the following information os windows browser edge and chrome version additional context i also noticed that there may be a second bug in this area if you add a fake crt file to get passed the saving it gets uploaded but the path to it does not get displayed when reentering the certificates tab | 1 |
243,047 | 7,852,671,040 | IssuesEvent | 2018-06-20 15:10:48 | ansible/galaxy | https://api.github.com/repos/ansible/galaxy | closed | I can kick off the import process for other users | area/backend area/frontend priority/high status/new type/bug | <!---
Verify first that your issue/request is not already reported on GitHub.
-->
## Bug Report
##### SUMMARY
If I log in and go to "My Imports" I can remove the filter for my username and see everyone's content. This allows me to click the "Restart Import" button on other people's projects. Is this a bug or the expected behavior?
| 1.0 | I can kick off the import process for other users - <!---
Verify first that your issue/request is not already reported on GitHub.
-->
## Bug Report
##### SUMMARY
If I log in and go to "My Imports" I can remove the filter for my username and see everyone's content. This allows me to click the "Restart Import" button on other people's projects. Is this a bug or the expected behavior?
| priority | i can kick off the import process for other users verify first that your issue request is not already reported on github bug report summary if log in and go to my imports i can remove the filter for my username and see everyone s content this allows to click the restart import button on other people s projects is this a bug or the expected behavior | 1 |
172,879 | 6,517,283,204 | IssuesEvent | 2017-08-27 21:10:42 | kgersen/Allegiance | https://api.github.com/repos/kgersen/Allegiance | opened | find a way to be in sync or merge official changes | High priority | Things happening here: https://github.com/FreeAllegiance/Allegiance
but the Steam Integration could be problematic if it's hardcoded. | 1.0 | find a way to be in sync or merge official changes - Things happening here: https://github.com/FreeAllegiance/Allegiance
but the Steam Integration could be problematic if it's hardcoded. | priority | find a way to be in sync or merge official changes things happening here but the steam integration could be problematic if it s hardcoded | 1 |
348,467 | 10,442,785,788 | IssuesEvent | 2019-09-18 13:44:13 | Buyen3/iwvg-ecosystem-ying-bao | https://api.github.com/repos/Buyen3/iwvg-ecosystem-ying-bao | opened | Heroku | Points : 0.5 Priority : high Type : structure | Desplegar en **Heroku**. Incluir **Badge** en README con link a la página de Swagger-ui.html. | 1.0 | Heroku - Desplegar en **Heroku**. Incluir **Badge** en README con link a la página de Swagger-ui.html. | priority | heroku desplegar en heroku incluir badge en readme con link a la página de swagger ui html | 1 |
217,004 | 7,313,812,318 | IssuesEvent | 2018-03-01 03:13:16 | HAS-CRM/IssueTracker | https://api.github.com/repos/HAS-CRM/IssueTracker | closed | EBM Integration: HAS Vietnam - Irene | Priority.High Status.Ongoing Status.PendingInfo Type.ChangeRequest Type.MajorChanges | Background:
- HAS Vietnam would like to include EBM integration into CRM
- HAS Vietnam does not use KYPS to validate customer
- Remove checks for Customer Code when submitting Quotation to ISS
- Disable Send to ISS/KYPS ribbon at Account entity
- Sales personnel should be able to modify customer code manually since there's no sync from EBM
- When it is approve by Hi-Front
- When the customer code at CRM does not match with EBM
- There are two business units for Vietnam: Hanoi and Ho Chi Minh
| 1.0 | EBM Integration: HAS Vietnam - Irene - Background:
- HAS Vietnam would like to include EBM integration into CRM
- HAS Vietnam does not use KYPS to validate customer
- Remove checks for Customer Code when submitting Quotation to ISS
- Disable Send to ISS/KYPS ribbon at Account entity
- Sales personnel should be able to modify customer code manually since there's no sync from EBM
- When it is approve by Hi-Front
- When the customer code at CRM does not match with EBM
- There are two business units for Vietnam: Hanoi and Ho Chi Minh
| priority | ebm integration has vietnam irene background has vietnam will like to include ebm integration into crm has vietnam does not use kyps to validate customer remove checks for customer code when submitting quotation to iss disable send to iss kyps ribbon at account entity sales personnel should be able to modify customer code manually since there s no sync from ebm when it is approve by hi front when the customer code at crm does not match with ebm there are two business unit for vietnam hanoi and ho chi minh | 1 |
248,542 | 7,933,684,706 | IssuesEvent | 2018-07-08 09:51:19 | MaxInertia/VRDataVisualization | https://api.github.com/repos/MaxInertia/VRDataVisualization | closed | Report Individual Characteristics | High Priority | Enumerate information on the selected points in the form of their values for each of the selected fields (corresponding to each axis). | 1.0 | Report Individual Characteristics - Enumerate information on the selected points in the form of their values for each of the selected fields (corresponding to each axis). | priority | report individual characteristics enumerate information on the selected points in the form of their values for each of the selected fields corresponding to each axis | 1 |
784,868 | 27,587,583,807 | IssuesEvent | 2023-03-08 21:06:28 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | [Moved from Discord] Crucible of Carnage: The blooodeye Bruiser! - Exploit handins | Quest - Cataclysm (80+) Priority: High Status: Confirmed Bug Report from Discord Exploit | Quest can be handed in 3 times after completion



How to reproduce - Complete quest by normal means. potentially duplicate entries in the DB

| 1.0 | [Moved from Discord] Crucible of Carnage: The blooodeye Bruiser! - Exploit handins - Quest can be handed in 3 times after completion



How to reproduce - Complete quest by normal means. potentially duplicate entries in the DB

| priority | crucible of carnage the blooodeye bruiser exploit handins quest can be handed in times after completion how to reproduce complete quest by normal means potentially duplicate entries in the db | 1 |
346,887 | 10,421,298,855 | IssuesEvent | 2019-09-16 05:31:56 | ahmedkaludi/pwa-for-wp | https://api.github.com/repos/ahmedkaludi/pwa-for-wp | closed | Need to serve required files from upload directory | High Priority bug | Ref: https://secure.helpscout.net/conversation/901825927/73575?folderId=2770545
Issue: Users are not allowed to write files in the root folder. In this case, we need to create the files in the upload folder and serve them to users (manifest.json, service-worker.js, etc.). The user is not getting notice of download files.
Suggestion: We can write rewrite rules to serve files from the upload folder. This rewrite rule is in a way of Wordpress. | 1.0 | Need to serve required files from upload directory - Ref: https://secure.helpscout.net/conversation/901825927/73575?folderId=2770545
Issue: Users are not allowed to write files in the root folder. In this case, we need to create the files in the upload folder and serve them to users (manifest.json, service-worker.js, etc.). The user is not getting notice of download files.
Suggestion: We can write rewrite rules to serve files from the upload folder. This rewrite rule is in a way of Wordpress. | priority | need to serve required files from upload directory ref issue users are not allowed to write files in the root folder in this case we need to create file in the upload folder and serve it to users manifest json service worker js etc user is not getting notice of download files suggestion we can write rewrite rules to serve files from the upload folder this rewrite rule is in a way of wordpress | 1 |
264,357 | 8,308,826,874 | IssuesEvent | 2018-09-24 00:57:56 | Zicerite/Gavania-Project | https://api.github.com/repos/Zicerite/Gavania-Project | closed | Stone, Granite, and Diorite Weapons | High Priority enhancement | 18, 22, and 26 damage respectively.
level 32, 38, and 42 respectively. | 1.0 | Stone, Granite, and Diorite Weapons - 18, 22, and 26 damage respectively.
level 32, 38, and 42 respectively. | priority | stone granite and diorite weapons and damage respectively level and respectively | 1 |
619,212 | 19,519,251,754 | IssuesEvent | 2021-12-29 15:26:53 | localstack/localstack | https://api.github.com/repos/localstack/localstack | closed | bug: elasticache Redis (cluster mode enabled) not created in localstack successfully | bug priority-high needs-triaging aws:elasticache | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
i am trying to create an elasticache cluster with redis (cluster mode enabled). it basically should create a cluster with n primary nodes and each node having m replicas.
aws elasticache create-replication-group was tried and it ran successfully, but replacing aws with awslocal was not a success
i see the output as
```
{
aws_1 | "ReplicationGroup": {
aws_1 | "ReplicationGroupId": "penta-redis",
aws_1 | "Description": "Demo cluster with replicas",
aws_1 | "Status": "available",
aws_1 | "CacheNodeType": "cache.t3.micro",
aws_1 | "ARN": "arn:aws:elasticache:us-east-1:000000000000:replicationgroup:penta-redis"
aws_1 | }
aws_1 | }
```
but calling redis-cli doesn't seem to work.
no success message is delivered and no port is reported
awslocal elasticache create-cache-cluster. this works, but it creates an elasticache (cluster mode disabled) with a single primary node and multiple replicas
### Expected Behavior
awslocal elasticache create-replication-group should create an elasticache (cluster mode enabled) with the specified nodes and replicas
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
`awslocal elasticache create-replication-group --replication-group-id demo-redis --replication-group-description "Demo cluster with replicas" --num-node-groups 3 --replicas-per-node-group 2 --cache-node-type cache.t3.micro --cache-parameter-group default.redis6.x.cluster.on --engine redis --engine-version 6.x --security-group-ids xxxxxx --cache-subnet-group-name xxxxxx`
compose file
```
aws:
build: ./localstack
ports:
- "443:443"
- "4566:4566"
- "4571:4571"
environment:
SERVICES: 's3,sqs,sns,elasticache'
DEBUG: '1'
LOCALSTACK_API_KEY: xxxxxxxx
```
### Environment
```markdown
- OS: ubuntu latest
- LocalStack: latest
```
### Anything else?
_No response_ | 1.0 | bug: elasticache Redis (cluster mode enabled) not created in localstack successfully - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
i am trying to create an elasticache cluster with redis (cluster mode enabled). it basically should create a cluster with n primary nodes and each node having m replicas.
aws elasticache create-replication-group was tried and it ran successfully, but replacing aws with awslocal was not a success
i see the output as
```
{
aws_1 | "ReplicationGroup": {
aws_1 | "ReplicationGroupId": "penta-redis",
aws_1 | "Description": "Demo cluster with replicas",
aws_1 | "Status": "available",
aws_1 | "CacheNodeType": "cache.t3.micro",
aws_1 | "ARN": "arn:aws:elasticache:us-east-1:000000000000:replicationgroup:penta-redis"
aws_1 | }
aws_1 | }
```
but calling redis-cli doesn't seem to work.
no success message is delivered and no port is reported
awslocal elasticache create-cache-cluster. this works, but it creates an elasticache (cluster mode disabled) with a single primary node and multiple replicas
### Expected Behavior
awslocal elasticache create-replication-group should create an elasticache (cluster mode enabled) with the specified nodes and replicas
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
`awslocal elasticache create-replication-group --replication-group-id demo-redis --replication-group-description "Demo cluster with replicas" --num-node-groups 3 --replicas-per-node-group 2 --cache-node-type cache.t3.micro --cache-parameter-group default.redis6.x.cluster.on --engine redis --engine-version 6.x --security-group-ids xxxxxx --cache-subnet-group-name xxxxxx`
compose file
```
aws:
build: ./localstack
ports:
- "443:443"
- "4566:4566"
- "4571:4571"
environment:
SERVICES: 's3,sqs,sns,elasticache'
DEBUG: '1'
LOCALSTACK_API_KEY: xxxxxxxx
```
### Environment
```markdown
- OS: ubuntu latest
- LocalStack: latest
```
### Anything else?
_No response_ | priority | bug elasticache redis cluster mode enabled not created in localstac sucessfully is there an existing issue for this i have searched the existing issues current behavior i am trying to create a elasticache cluster wuith redis cluster mode enabled it basically should create a cluster with n primary nodes and each node having m replicas aws elasticache create replication group was tried and it ran successfully but replacing aws with aws local was not a success i see the output as aws replicationgroup aws replicationgroupid penta redis aws description demo cluster with replicas aws status available aws cachenodetype cache micro aws arn arn aws elasticache us east replicationgroup penta redis aws aws but calling on redis cli dosent seem to work no success message is deliverd or port is informed awslocal elasticache create cache cluster this works but it creates an elasticache cluster mode disabled with a single priamry node and multiple replicas expected behavior awslocal elasticache create replication group should create a elasticache cluster mode enabled with specifed nodes and replicas how are you starting localstack with a docker compose file steps to reproduce how are you starting localstack e g bin localstack command arguments or docker compose yml docker run localstack localstack client commands e g aws sdk code snippet or sequence of awslocal commands awslocal elasticache create replication group replication group id demo redis replication group description demo cluster with replicas num node groups replicas per node group cache node type cache micro cache parameter group default x cluster on engine redis engine version x security group ids xxxxxx cache subnet group name xxxxxx compose file aws build localstack ports environment services sqs sns elasticache debug localstack api key xxxxxxxx environment markdown os ubuntu latest localstack latest anything else no response | 1 |
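The reporter's observation that redis-cli has nothing to connect to can be illustrated by inspecting the JSON in the record above: the response omits the node-layout fields that a cluster-mode-enabled group's API response normally carries. `NodeGroups` and `ConfigurationEndpoint` are standard field names in the ElastiCache `describe-replication-groups` response; their absence being the root cause here is my assumption, not something the issue confirms. A minimal sketch:

```python
import json

# The ReplicationGroup JSON that LocalStack returned, reproduced verbatim
# from the issue body above.
response = json.loads("""
{
    "ReplicationGroup": {
        "ReplicationGroupId": "penta-redis",
        "Description": "Demo cluster with replicas",
        "Status": "available",
        "CacheNodeType": "cache.t3.micro",
        "ARN": "arn:aws:elasticache:us-east-1:000000000000:replicationgroup:penta-redis"
    }
}
""")

group = response["ReplicationGroup"]

# A real cluster-mode-enabled replication group would also describe its node
# layout and the endpoint a redis-cli client needs to connect to.
missing = [key for key in ("NodeGroups", "ConfigurationEndpoint") if key not in group]
print(missing)  # → ['NodeGroups', 'ConfigurationEndpoint']
```

Both fields are absent, which is consistent with "no port is reported" even though the group's status reads "available".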
42,792 | 2,874,125,297 | IssuesEvent | 2015-06-08 20:46:01 | phetsims/tasks | https://api.github.com/repos/phetsims/tasks | closed | Update sim info content- topics and learning goals | High Priority Misc | For any sims you have claimed (or have been claimed for you @ycarpenterphet ), review the Topics and Learning Goals on the current HTML5 sim page. If they look appropriate, keep them. If they need updating, log in as an administrator and make the necessary changes. After it is reviewed and/or updated, check it off using [this spreadsheet] (https://docs.google.com/spreadsheets/d/19v98MXCCbfXF6gKOOtoTQ6jNvrGxHb02yKzzwD3BALQ/edit#).
@ariel-phet @oliver-phet @arouinfar | 1.0 | Update sim info content- topics and learning goals - For any sims you have claimed (or have been claimed for you @ycarpenterphet ), review the Topics and Learning Goals on the current HTML5 sim page. If they look appropriate, keep them. If they need updating, log in as an administrator and make the necessary changes. After it is reviewed and/or updated, check it off using [this spreadsheet] (https://docs.google.com/spreadsheets/d/19v98MXCCbfXF6gKOOtoTQ6jNvrGxHb02yKzzwD3BALQ/edit#).
@ariel-phet @oliver-phet @arouinfar | priority | update sim info content topics and learning goals for any sims you have claimed or have been claimed for you ycarpenterphet review the topics and learning goals on the current sim page if they look appropriate keep them if they need updating log in as an administrator and make the necessary changes after it is reviewed and or updated check it off using ariel phet oliver phet arouinfar | 1 |
80,081 | 3,550,496,730 | IssuesEvent | 2016-01-20 22:13:04 | INN/Largo | https://api.github.com/repos/INN/Largo | opened | add a "none" option for the top term | priority: high type: improvement | currently we don't give people the option to not set a top term (even if it's something lame like "uncategorized"). We should add a "none" option to the dropdown menu in the admin and account for this in the front end display of the top tag (by just not returning any of the markup). | 1.0 | add a "none" option for the top term - currently we don't give people the option to not set a top term (even if it's something lame like "uncategorized"). We should add a "none" option to the dropdown menu in the admin and account for this in the front end display of the top tag (by just not returning any of the markup). | priority | add a none option for the top term currently we don t give people the option to not set a top term even if it s something lame like uncategorized we should add a none option to the dropdown menu in the admin and account for this in the front end display of the top tag by just not returning any of the markup | 1 |
252,848 | 8,047,357,540 | IssuesEvent | 2018-08-01 00:10:01 | osulp/Scholars-Archive | https://api.github.com/repos/osulp/Scholars-Archive | closed | Metadata fields should always be editable for Administrators | Priority: High | Currently, some metadata fields are not editable (when values exist, they're locked as readonly):
- [ ] Location
- [ ] Related Items
Example:
https://ir.library.oregonstate.edu/concern/technical_reports/5q47rt93r/edit?locale=en | 1.0 | Metadata fields should always be editable for Administrators - Currently, some metadata fields are not editable (when values exist, they're locked as readonly):
- [ ] Location
- [ ] Related Items
Example:
https://ir.library.oregonstate.edu/concern/technical_reports/5q47rt93r/edit?locale=en | priority | metadata fields should always be editable for administrators currently some metadata fields are not editable when values exist they re locked as readonly location related items example | 1 |
466,923 | 13,437,279,194 | IssuesEvent | 2020-09-07 15:39:39 | OpenSRP/opensrp-server-core | https://api.github.com/repos/OpenSRP/opensrp-server-core | closed | Implement Queuing for Server side plan evaluation | Dynamic Tasking Priority: High question | We need to implement queuing for server-side plan evaluation.
This is because plan evaluation may be evaluating a plan that may be applicable in all jurisdictions in a country.
It has been decided that we shall use RabbitMQ for queuing
We need to add a database table that will track the jobs posted to RabbitMQ. The information in the database table will allow queues to resume in case the queue is cleared for any reason while there were tasks queued up that were not completed
We need to decide the unit of a job to be queued by rabbitMQ
1. Use entity as unit of job
2. Use jurisdiction and action as a unit of job
**1. Use entity as unit**
Using entity will mean we have as many jobs queued as per the entities targeted in the plan e.g all residential structures, all families, all family members, and all tasks that are applicable by the plan.
This has the advantages in that in case of restart we shall not process any tasks for entities already completed.
**2. Use jurisdiction and entity as unit**
This will mean that we have jobs queued for each jurisdiction that is applicable for the plan. We could break down so that we have a job per action and jurisdiction. This will imply that each job will be handling an action in the plan per jurisdiction
This has the advantages in that we can track task generation by jurisdictions. In case jurisdictions have a lot of structures during any restart this would mean restarting evaluation for jobs queued that were partially completed
When all the evaluation is complete and all rabbitMQ tasks are complete, we need to purge the jobs table
| 1.0 | Implement Queuing for Server side plan evaluation - We need to implement queuing for server-side plan evaluation.
This is because plan evaluation may be evaluating a plan that may be applicable in all jurisdictions in a country.
It has been decided that we shall use RabbitMQ for queuing
We need to add a database table that will track the jobs posted to RabbitMQ. The information in the database table will allow queues to resume in case the queue is cleared for any reason while there were tasks queued up that were not completed
We need to decide the unit of a job to be queued by rabbitMQ
1. Use entity as unit of job
2. Use jurisdiction and action as a unit of job
**1. Use entity as unit**
Using entity will mean we have as many jobs queued as per the entities targeted in the plan e.g all residential structures, all families, all family members, and all tasks that are applicable by the plan.
This has the advantages in that in case of restart we shall not process any tasks for entities already completed.
**2. Use jurisdiction and entity as unit**
This will mean that we have jobs queued for each jurisdiction that is applicable for the plan. We could break down so that we have a job per action and jurisdiction. This will imply that each job will be handling an action in the plan per jurisdiction
This has the advantages in that we can track task generation by jurisdictions. In case jurisdictions have a lot of structures during any restart this would mean restarting evaluation for jobs queued that were partially completed
When all the evaluation is complete and all rabbitMQ tasks are complete, we need to purge the jobs table
| priority | implement queuing for server side plan evaluation we need implement queuing for server side plan evaluation this is because plan evaluation may be evaluating a plan that maybe be applicable in all jurisdictions in a country it has been decided that we shall rabbitmq for queuing we need to add a database table that will track the jobs posted to rabbitmq the information in the database table will allow queues to resume in case the queue is cleared for any reason while there were tasks queued up that were not completed we need to decide the unit of a job to be queued by rabbitmq use entity as unit of job use jurisdiction and action as a unit of job use entity as unit using entity will mean we have as many jobs queued as per the entities targeted in the plan e g all residential structures all families all family members and all tasks that are applicable by the plan this has the advantages in that in case of restart we shall not process any tasks for entities already completed use jurisdiction and entity as unit this will mean that we have jobs queued for each jurisdiction that is applicable for the plan we could break down so that we have a job per action and jurisdiction this will imply that each job will be handling an action in the plan per jurisdiction this has the advantages in that we can track task generation by jurisdictions in case jurisdictions have a lot of structures during any restart this would mean restarting evaluation for jobs queued that were partially completed when all the evaluation is complete and all rabbitmq tasks are complete we need to purge the jobs table | 1 |
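The trade-off the record above weighs — entity as the unit of a job versus jurisdiction-and-action as the unit — can be sketched with a back-of-the-envelope count. All names and numbers below are hypothetical, purely to show how queue depth differs between the two options:

```python
from itertools import product

# Hypothetical plan scope: these names are illustrative, not from OpenSRP.
jurisdictions = ["district-a", "district-b", "district-c"]
actions = ["register-family", "assign-task"]
entities_per_jurisdiction = 1000  # e.g. structures/families per jurisdiction

# Option 1: one queued job per targeted entity.
entity_jobs = len(jurisdictions) * entities_per_jurisdiction

# Option 2: one queued job per (jurisdiction, action) pair.
pair_jobs = len(list(product(jurisdictions, actions)))

print(entity_jobs, pair_jobs)  # → 3000 6
```

Option 1 gives fine-grained resumability (no entity is reprocessed after a restart) at the cost of far more queued jobs; option 2 keeps the queue small but means a partially completed job restarts its whole jurisdiction — exactly the trade-off the issue describes.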
88,643 | 3,783,468,611 | IssuesEvent | 2016-03-19 05:07:25 | cs2103jan2016-w10-4j/main | https://api.github.com/repos/cs2103jan2016-w10-4j/main | closed | Handler - Storage | priority.high type.task | Handler @berkinbarut
pass string[] to storage instead of ArrayList<Task>
Storage @yunshian
pass String[] to handler instead of ArrayList<Task> | 1.0 | Handler - Storage - Handler @berkinbarut
pass string[] to storage instead of ArrayList<Task>
Storage @yunshian
pass String[] to handler instead of ArrayList<Task> | priority | handler storage handler berkinbarut pass string to storage instead of arraylist storage yunshian pass string to handler instead of arraylist | 1 |
648,826 | 21,194,915,392 | IssuesEvent | 2022-04-08 22:34:42 | GoogleCloudPlatform/asl-ml-immersion | https://api.github.com/repos/GoogleCloudPlatform/asl-ml-immersion | opened | pre-commit is failing because of new version of black | bug priority:high | As seen in the recent commit by @takumiohym pre-commit is failing because of this error:
```Traceback (most recent call last):
File "/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/bin/black", line 8, in <module>
sys.exit(patched_main())
File "/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/lib/python3.7/site-packages/black/__init__.py", line 1423, in patched_main
patch_click()
File "/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/lib/python3.7/site-packages/black/__init__.py", line 1409, in patch_click
from click import _unicodefun
ImportError: cannot import name '_unicodefun' from 'click' (/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/lib/python3.7/site-packages/click/__init__.py)```
The way to fix this is to update black from `22.1.0` to `22.3.0` | 1.0 | pre-commit is failing because of new version of black - As seen in the recent commit by @takumiohym pre-commit is failing because of this error:
```Traceback (most recent call last):
File "/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/bin/black", line 8, in <module>
sys.exit(patched_main())
File "/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/lib/python3.7/site-packages/black/__init__.py", line 1423, in patched_main
patch_click()
File "/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/lib/python3.7/site-packages/black/__init__.py", line 1409, in patch_click
from click import _unicodefun
ImportError: cannot import name '_unicodefun' from 'click' (/home/jupyter/.cache/pre-commit/repoxvdpejka/py_env-python3.7/lib/python3.7/site-packages/click/__init__.py)```
The way to fix this is to update black from `22.1.0` to `22.3.0` | priority | pre commit is failing because of new version of black as seen in the recent commit by takumiohym pre commit is failing because of this error traceback most recent call last file home jupyter cache pre commit repoxvdpejka py env bin black line in sys exit patched main file home jupyter cache pre commit repoxvdpejka py env lib site packages black init py line in patched main patch click file home jupyter cache pre commit repoxvdpejka py env lib site packages black init py line in patch click from click import unicodefun importerror cannot import name unicodefun from click home jupyter cache pre commit repoxvdpejka py env lib site packages click init py the way to fix this is to update black from to | 1 |
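The fix the record above names — updating black from `22.1.0` to `22.3.0` — would typically land as a `rev` bump in the repo's `.pre-commit-config.yaml`. The surrounding hook layout below is the common shape for the psf/black hook, not copied from this repository:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0  # was 22.1.0; 22.3.0 no longer imports click's private _unicodefun
    hooks:
      - id: black
```

Black 22.3.0 removed the `from click import _unicodefun` call that click 8.1 dropped, which is why the pinned `22.1.0` environment fails with the `ImportError` shown in the traceback.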
795,124 | 28,062,597,693 | IssuesEvent | 2023-03-29 13:32:23 | AY2223S2-CS2103T-W09-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-W09-4/tp | closed | Fix batchadd and batchexport bug | type.Bug priority.High | Fix batchadd and batchexport bug to take in multiple tags
| 1.0 | Fix batchadd and batchexport bug - Fix batchadd and batchexport bug to take in multiple tags
| priority | fix batchadd and batchexport bug fix batchadd and batchexport bug to take in multiple tags | 1 |
453,203 | 13,066,415,696 | IssuesEvent | 2020-07-30 21:39:08 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Geology field behavior in data entry has a bug | Function-DataEntry/Bulkloading Priority-High | We have a group of fossil shells we are databasing and want to add formation, epoch etc. The handbook doesn't address how to get Geology Attributes available for a collection. Despite selecting "Show" for Geology Attributes in the Customize Form tool on Data Entry, we can't see them. Is there some authorization that we need to see these fields? I see that other collections are able to do this - see http://arctos.database.museum/guid/UNM:ES:13067.
**Priority**
I would like to have this resolved by date: 17 Dec 2018 when we start databasing these specimens
| 1.0 | Geology field behavior in data entry has a bug - We have a group of fossil shells we are databasing and want to add formation, epoch etc. The handbook doesn't address how to get Geology Attributes available for a collection. Despite selecting "Show" for Geology Attributes in the Customize Form tool on Data Entry, we can't see them. Is there some authorization that we need to see these fields? I see that other collections are able to do this - see http://arctos.database.museum/guid/UNM:ES:13067.
**Priority**
I would like to have this resolved by date: 17 Dec 2018 when we start databasing these specimens
| priority | geology field behavior in data entry has a bug we have a group of fossil shells we are databasing and want to add formation epoch etc the handbook doesn t address how to get geology attributes available for a collection despite selecting show for geology attributes in the customize form tool on data entry we can t see them is there some authorization that we need to see these fields i see that other collections are able to do this see priority i would like to have this resolved by date dec when we start databasing these specimens | 1 |
365,742 | 10,791,174,245 | IssuesEvent | 2019-11-05 16:10:40 | AY1920S1-CS2113T-W12-3/main | https://api.github.com/repos/AY1920S1-CS2113T-W12-3/main | closed | As a hall resident, I can cancel booking of the facility | priority.High type.Story | So that I can free up the room for others | 1.0 | As a hall resident, I can cancel booking of the facility - So that I can free up the room for others | priority | as a hall resident i can cancel booking of the facility so that i can free up the room for others | 1 |
788,182 | 27,746,085,645 | IssuesEvent | 2023-03-15 17:07:52 | Satellite-im/Uplink | https://api.github.com/repos/Satellite-im/Uplink | closed | Settings - Wire up "Compress & Download Cache" | Settings High Priority | **Task:** As the title says, we want to wire up Compress & Download Cache in Settings/Developer. We should include a notification when the user clicks the button
**Screenshot:**
<img width="952" alt="Screenshot 2023-02-01 at 6 26 52 PM" src="https://user-images.githubusercontent.com/93608357/216190590-ff0d1c37-4725-430e-b142-d6cc519d49ec.png">
| 1.0 | Settings - Wire up "Compress & Download Cache" - **Task:** Like Title says we want to wire up Compress and Download Cache in Settings/Developer. We should include a notification when User clicks button
**Screenshot:**
<img width="952" alt="Screenshot 2023-02-01 at 6 26 52 PM" src="https://user-images.githubusercontent.com/93608357/216190590-ff0d1c37-4725-430e-b142-d6cc519d49ec.png">
| priority | settings wire up compress download cache task like title says we want to wire up compress and download cache in settings developer we should include a notification when user clicks button screenshot img width alt screenshot at pm src | 1 |
236,928 | 7,753,678,052 | IssuesEvent | 2018-05-31 02:06:57 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0006956:
define default value for boolean db fields | Felamimail high priority | **Reported by pschuele on 16 Aug 2012 12:45**
**Version:** Joey (2012.10.1~alpha1)
define default value for boolean db fields
**Additional information:** see https://gerrit.tine20.org/tine20/#/c/916
| 1.0 | 0006956:
define default value for boolean db fields - **Reported by pschuele on 16 Aug 2012 12:45**
**Version:** Joey (2012.10.1~alpha1)
define default value for boolean db fields
**Additional information:** see https://gerrit.tine20.org/tine20/#/c/916
| priority | define default value for boolean db fields reported by pschuele on aug version joey define default value for boolean db fields additional information see | 1 |
274,795 | 8,567,961,240 | IssuesEvent | 2018-11-10 16:58:33 | CS2113-AY1819S1-W13-2/main | https://api.github.com/repos/CS2113-AY1819S1-W13-2/main | closed | As a diver I want to be able to tell my current pressure group without performing manual calculations. | priority.high type.epic type.story | Be able to calculate dive pressure groups automatically. | 1.0 | As a diver I want to be able to tell my current pressure group without performing manual calculations. - Be able to calculate dive pressure groups automatically. | priority | as a diver i want to be able to tell my current pressure group without performing manual calculations be able to calculate dive pressure groups automatically | 1 |
427,120 | 12,393,145,694 | IssuesEvent | 2020-05-20 15:00:07 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Update of current styles - Identify tool update | Accepted Priority: High Project: C028 | ## Description
https://github.com/geosolutions-it/MapStore2-C028/issues/94
## How to reproduce
https://github.com/geosolutions-it/MapStore2-C028/issues/94
| 1.0 | Update of current styles - Identify tool update - ## Description
https://github.com/geosolutions-it/MapStore2-C028/issues/94
## How to reproduce
https://github.com/geosolutions-it/MapStore2-C028/issues/94
| priority | update of current styles identify tool update description how to reproduce | 1 |
190,127 | 6,810,165,858 | IssuesEvent | 2017-11-05 01:55:41 | Wuzzy2/MineClone2-Bugs | https://api.github.com/repos/Wuzzy2/MineClone2-Bugs | closed | Compass crash? | bug CRITICAL HIGH PRIORITY items | ``` 2017-10-15 13:50:02: ERROR[Main]: ServerError: AsyncErr: environment_Step: Runtime error from mod 'mcl_compass' in callback environment_Step(): ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: attempt to perform arithmetic on field 'x' (a nil value)
2017-10-15 13:50:02: ERROR[Main]: stack traceback:
2017-10-15 13:50:02: ERROR[Main]: ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: in function <...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:13>
2017-10-15 13:50:02: ERROR[Main]: /usr/share/games/minetest/builtin/game/register.lua:412: in function </usr/share/games/minetest/builtin/game/register.lua:392>
2017-10-15 13:50:02: ERROR[Main]: stack traceback:
``` | 1.0 | Compass crash? - ``` 2017-10-15 13:50:02: ERROR[Main]: ServerError: AsyncErr: environment_Step: Runtime error from mod 'mcl_compass' in callback environment_Step(): ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: attempt to perform arithmetic on field 'x' (a nil value)
2017-10-15 13:50:02: ERROR[Main]: stack traceback:
2017-10-15 13:50:02: ERROR[Main]: ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: in function <...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:13>
2017-10-15 13:50:02: ERROR[Main]: /usr/share/games/minetest/builtin/game/register.lua:412: in function </usr/share/games/minetest/builtin/game/register.lua:392>
2017-10-15 13:50:02: ERROR[Main]: stack traceback:
``` | priority | compass crash error servererror asyncerr environment step runtime error from mod mcl compass in callback environment step inetest games mods items mcl compass init lua attempt to perform arithmetic on field x a nil value error stack traceback error inetest games mods items mcl compass init lua in function error usr share games minetest builtin game register lua in function error stack traceback | 1 |
211,257 | 7,199,662,081 | IssuesEvent | 2018-02-05 16:34:40 | ropensci/drake | https://api.github.com/repos/ropensci/drake | opened | Do not write to the cache anywhere in drake_build() | high priority internals | `drake_build()` should just return a target's value, and it should not cache errors or progress on its own. This is especially important for jobs deployed to remote locations that have no access to the cache. Transferring data should be the responsibility of the `future` package. | 1.0 | Do not write to the cache anywhere in drake_build() - `drake_build()` should just return a target's value, and it should not cache errors or progress on its own. This is especially important for jobs deployed to remote locations that have no access to the cache. Transferring data should be the responsibility of the `future` package. | priority | do not write to the cache anywhere in drake build drake build should just return a target s value and it should not cache errors or progress on its own this is especially important for jobs deployed to remote locations that have no access to the cache transferring data should be the responsibility of the future package | 1 |
264,537 | 8,316,422,852 | IssuesEvent | 2018-09-25 09:00:28 | openfaas/openfaas-cloud | https://api.github.com/repos/openfaas/openfaas-cloud | closed | Research: How do we make logs accessible to users? | enhancement help wanted priority/high | ## Feature
There are a couple of scenarios (at least) where we need to make logs available to users:
* Docker build fails for reason X
This should be made available and the GitHub status API only provides a short summary
* git-tar operation fails due to any number of reasons
If the message is short we can store this in the GitHub status
## Related but out of scope in this issue:
* Exposing function logs to users
## Constraints
Can we do this without installing, maintaining and continually migrating a stateful database?
## Potential implementations
* GitHub Gists in user account
* Separate branch in source repository to hold logs
* Separate GitHub repo that we or the user owns
* Use GitHub issues raised in the user's repo
* One GitHub issue and many comments
Or something else completely.
The key point is that we get feedback to the user as a next step, rather than having no logs, how can we get to some logs. | 1.0 | Research: How do we make logs accessible to users? - ## Feature
There are a couple of scenarios (at least) where we need to make logs available to users:
* Docker build fails for reason X
This should be made available and the GitHub status API only provides a short summary
* git-tar operation fails due to any number of reasons
If the message is short we can store this in the GitHub status
## Related but out of scope in this issue:
* Exposing function logs to users
## Constraints
Can we do this without installing, maintaining and continually migrating a stateful database?
## Potential implementations
* GitHub Gists in user account
* Separate branch in source repository to hold logs
* Separate GitHub repo that we or the user owns
* Use GitHub issues raised in the user's repo
* One GitHub issue and many comments
Or something else completely.
The key point is that we get feedback to the user as a next step, rather than having no logs, how can we get to some logs. | priority | research how do we make logs accessible to users feature there are a couple of scenarios at least where we need to make logs available to users docker build fails for reason x this should be made available and the github status api only provides a short summary git tar operation fails due to any number of reasons if the message is short we can store this in the github status related but out of scope in this issue exposing function logs to users constraints can we do this without installing maintaining and continually migrating a stateful database potential implementations github gists in user account separate branch in source repository to hold logs separate github repo that we or the user owns use github issues raised in the user s repo one github issue and many comments or something else completely the key point is that we get feedback to the user as a next step rather than having no logs how can we get to some logs | 1 |
216,179 | 7,301,942,992 | IssuesEvent | 2018-02-27 07:53:50 | SANBIBiodiversityforLife/species | https://api.github.com/repos/SANBIBiodiversityforLife/species | opened | Seakeys threat import | high-priority | For the NBA, Dewidine needs to have threat codes in the database for marine species. Unfortunately this data wasn't done when the seakeys pages were written or when the marine assessments were done. So Prideel has assigned an intern to look over all of the threat narratives and add coding. The easiest way for her to do it was to use a spreadsheet - they didn't want to use a web interface. Now I have to write a script to take their threat codings and import it into the database. Attached is an example spreadsheet with some data filled in.
[Seakeys_ThreatData_Capture - prideel.xlsx](https://github.com/SANBIBiodiversityforLife/species/files/1761705/Seakeys_ThreatData_Capture.-.prideel.xlsx)
| 1.0 | Seakeys threat import - For the NBA, Dewidine needs to have threat codes in the database for marine species. Unfortunately this data wasn't done when the seakeys pages were written or when the marine assessments were done. So Prideel has assigned an intern to look over all of the threat narratives and add coding. The easiest way for her to do it was to use a spreadsheet - they didn't want to use a web interface. Now I have to write a script to take their threat codings and import it into the database. Attached is an example spreadsheet with some data filled in.
[Seakeys_ThreatData_Capture - prideel.xlsx](https://github.com/SANBIBiodiversityforLife/species/files/1761705/Seakeys_ThreatData_Capture.-.prideel.xlsx)
| priority | seakeys threat import for the nba dewidine needs to have threat codes in the database for marine species unfortunately this data wasn t done when the seakeys pages were written or when the marine assessments were done so prideel has assigned an intern to look over all of the threat narratives and add coding the easiest way for her to do it was to use a spreadsheet they didn t want to use a web interface now i have to write a script to take their threat codings and import it into the database attached is an example spreadsheet with some data filled in | 1 |
517,651 | 15,017,558,473 | IssuesEvent | 2021-02-01 11:00:33 | Neural-Systems-at-UIO/VisuAlign | https://api.github.com/repos/Neural-Systems-at-UIO/VisuAlign | closed | LocaliZoom links (nonlinear) | High Priority enhancement | I have two datasets for which I would like to have LocaliZoom links for the nonlinear registration.
The JSON files can be found here: Z:\HBP_Curation\QuickNII_projects\Projects\Finished projects\Atlas_of_parvalbumin_and_somatostatin\Nonlin_JSONs and the corresponding Navigator blocks are:
1301_4184_6006 (rat 25203)
1301_4224_6066 (rat 25204)
1301_4204_6046 (rat 25205)
1301_4164_5986 (rat 25206)
1301_4244_6086 (mouse 81264)
1301_4247_6091 (mouse 81265)
1301_4245_6089 (mouse 81266)
1301_4246_6090 (mouse 81267)
Thanks!
Ingvild
| 1.0 | LocaliZoom links (nonlinear) - I have two datasets for which I would like to have LocaliZoom links for the nonlinear registration.
The JSON files can be found here: Z:\HBP_Curation\QuickNII_projects\Projects\Finished projects\Atlas_of_parvalbumin_and_somatostatin\Nonlin_JSONs and the corresponding Navigator blocks are:
1301_4184_6006 (rat 25203)
1301_4224_6066 (rat 25204)
1301_4204_6046 (rat 25205)
1301_4164_5986 (rat 25206)
1301_4244_6086 (mouse 81264)
1301_4247_6091 (mouse 81265)
1301_4245_6089 (mouse 81266)
1301_4246_6090 (mouse 81267)
Thanks!
Ingvild
| priority | localizoom links nonlinear i have two datasets for which i would like to have localizoom links for the nonlinear registration the json files can be found here z hbp curation quicknii projects projects finished projects atlas of parvalbumin and somatostatin nonlin jsons and the corresponding navigator blocks are rat rat rat rat mouse mouse mouse mouse thanks ingvild | 1 |
809,119 | 30,175,001,463 | IssuesEvent | 2023-07-04 03:10:33 | reactive-python/reactpy | https://api.github.com/repos/reactive-python/reactpy | closed | Model's JSON Pointer Not Updated | type-bug priority-1-high release-patch | We are not appropriately updating the location of a given component within the overall VDOM when it changes position. [This particular line](https://github.com/reactive-python/reactpy/blob/f065655ae1fc8f93a0ca05769be19e304f607dfa/src/py/reactpy/reactpy/core/layout.py#L492) is at fault - it just blindly copies the path from the old model instead of recomputing that path given the new parent and index (`f"{new_parent.patch_path}/children/{new_index}"`).
### Discussed in https://github.com/reactive-python/reactpy/discussions/1081
<div type='discussions-op-text'>
<sup>Originally posted by **numpde** July 2, 2023</sup>
I'm confused about what's happening here. Consider this App:
```Python
from reactpy import component, use_state
from reactpy.html import div, button
from reactpy.core.types import State
@component
def Item(item: str, items: State):
color = use_state(None)
def deleteme(event):
items.set_value(lambda items: [i for i in items if (i != item)])
def colorize(event):
color.set_value(lambda c: "blue" if not c else None)
return div(
{'style': {'background-color': color.value or "transparent", 'padding': "5px"}},
div(
button({'onClick': colorize}, f"Color {item}"),
button({'onClick': deleteme}, f"Delete {item}"),
),
)
@component
def App():
items = use_state(["A", "B", "C"])
return div(
[
Item(item, items, key=item)
for item in items.value
]
)
```
I embed it from a Django template. [Here's](https://playcode.io/1521983) the equivalent ReactJS implementation for reference.
Now, when I delete **Item A** then click to color **Item B**, then unexpectedly, **Item C** appears overwritten.
reactpy==1.0.1
reactpy-django==3.2.0
</div>
| 1.0 | Model's JSON Pointer Not Updated - We are not appropriately updating the location of a given component within the overall VDOM when it changes position. [This particular line](https://github.com/reactive-python/reactpy/blob/f065655ae1fc8f93a0ca05769be19e304f607dfa/src/py/reactpy/reactpy/core/layout.py#L492) is at fault - it just blindly copies the path from the old model instead of recomputing that path given the new parent and index (`f"{new_parent.patch_path}/children/{new_index}"`).
### Discussed in https://github.com/reactive-python/reactpy/discussions/1081
<div type='discussions-op-text'>
<sup>Originally posted by **numpde** July 2, 2023</sup>
I'm confused about what's happening here. Consider this App:
```Python
from reactpy import component, use_state
from reactpy.html import div, button
from reactpy.core.types import State
@component
def Item(item: str, items: State):
color = use_state(None)
def deleteme(event):
items.set_value(lambda items: [i for i in items if (i != item)])
def colorize(event):
color.set_value(lambda c: "blue" if not c else None)
return div(
{'style': {'background-color': color.value or "transparent", 'padding': "5px"}},
div(
button({'onClick': colorize}, f"Color {item}"),
button({'onClick': deleteme}, f"Delete {item}"),
),
)
@component
def App():
items = use_state(["A", "B", "C"])
return div(
[
Item(item, items, key=item)
for item in items.value
]
)
```
I embed it from a Django template. [Here's](https://playcode.io/1521983) the equivalent ReactJS implementation for reference.
Now, when I delete **Item A** then click to color **Item B**, then unexpectedly, **Item C** appears overwritten.
reactpy==1.0.1
reactpy-django==3.2.0
</div>
| priority | model s json pointer not updated we are not appropriately updating the location of a given component within the overall vdom when it changes position is at fault it just blindly copies the path from the old model instead of recomputing that path given the new parent and index f new parent patch path children new index discussed in originally posted by numpde july i m confused about what s happening here consider this app python from reactpy import component use state from reactpy html import div button from reactpy core types import state component def item item str items state color use state none def deleteme event items set value lambda items def colorize event color set value lambda c blue if not c else none return div style background color color value or transparent padding div button onclick colorize f color item button onclick deleteme f delete item component def app items use state return div item item items key item for item in items value i embed it from a django template the equivalent reactjs implementation for reference now when i delete item a then click to color item b then unexpectedly item c appears overwritten reactpy reactpy django | 1 |