Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1
value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3
values | title stringlengths 1 957 | labels stringlengths 4 795 | body stringlengths 1 259k | index stringclasses 12
values | text_combine stringlengths 96 259k | label stringclasses 2
values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
384,735 | 11,402,487,999 | IssuesEvent | 2020-01-31 03:25:02 | PlasmaPy/PlasmaPy | https://api.github.com/repos/PlasmaPy/PlasmaPy | opened | Create plasmapy.analysis [PLEP 7] | Changes existing API Feature request Needs change log entry Priority: medium Refactoring | As part of the changes associated with the forthcoming PLEP 7 (PlasmaPy/PlasmaPy-PLEPs#26), we decided create a ``plasmapy.analysis`` sub-package.
This sub-package is intended to be your analysis toolbox, whereas, the ``plasmapy.diagnostics`` sub-package is your tool organizer. Consider a swept langmuir probe diagnostic. The functions to calculate ion-saturation current, electron temperature, etc. would be contained in ``plasmapy.analysis``. The class that fully defines the parameters of a swept langmuir probe (collection area, orientation, collector type, etc.) and provides access the the associated analysis tools would resided in ``plasmapy.diagnostics``. | 1.0 | Create plasmapy.analysis [PLEP 7] - As part of the changes associated with the forthcoming PLEP 7 (PlasmaPy/PlasmaPy-PLEPs#26), we decided create a ``plasmapy.analysis`` sub-package.
This sub-package is intended to be your analysis toolbox, whereas, the ``plasmapy.diagnostics`` sub-package is your tool organizer. Consider a swept langmuir probe diagnostic. The functions to calculate ion-saturation current, electron temperature, etc. would be contained in ``plasmapy.analysis``. The class that fully defines the parameters of a swept langmuir probe (collection area, orientation, collector type, etc.) and provides access the the associated analysis tools would resided in ``plasmapy.diagnostics``. | priority | create plasmapy analysis as part of the changes associated with the forthcoming plep plasmapy plasmapy pleps we decided create a plasmapy analysis sub package this sub package is intended to be your analysis toolbox whereas the plasmapy diagnostics sub package is your tool organizer consider a swept langmuir probe diagnostic the functions to calculate ion saturation current electron temperature etc would be contained in plasmapy analysis the class that fully defines the parameters of a swept langmuir probe collection area orientation collector type etc and provides access the the associated analysis tools would resided in plasmapy diagnostics | 1 |
88,098 | 3,771,482,610 | IssuesEvent | 2016-03-16 17:43:49 | ngageoint/hootenanny-ui | https://api.github.com/repos/ngageoint/hootenanny-ui | reopened | Basemap upload ignores defined Basemap Name | Category: UI Priority: High Priority: Medium Type: Bug | If you specify a Basemap Name for custom basemap it does not use that name but rather saves it using the file name. | 2.0 | Basemap upload ignores defined Basemap Name - If you specify a Basemap Name for custom basemap it does not use that name but rather saves it using the file name. | priority | basemap upload ignores defined basemap name if you specify a basemap name for custom basemap it does not use that name but rather saves it using the file name | 1 |
149,733 | 5,724,794,055 | IssuesEvent | 2017-04-20 15:14:06 | Osslack/HANA_SSBM | https://api.github.com/repos/Osslack/HANA_SSBM | closed | Unterschied in Laufzeit bei mehreren Befehlen gleichzeitig oder nacheinander. | Priority_medium | Macht es einen unterschied, ob man q1 oder q1.1,q1.2,q1.3 ausführt? | 1.0 | Unterschied in Laufzeit bei mehreren Befehlen gleichzeitig oder nacheinander. - Macht es einen unterschied, ob man q1 oder q1.1,q1.2,q1.3 ausführt? | priority | unterschied in laufzeit bei mehreren befehlen gleichzeitig oder nacheinander macht es einen unterschied ob man oder ausführt | 1 |
593,698 | 18,014,503,610 | IssuesEvent | 2021-09-16 12:32:38 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | (Creature): Bosses do not reset correctly [phase+respawn] | Priority-Medium Confirmed | ### Current Behaviour
All bosses run back to their spawn point once a reset is called
### Expected Blizzlike Behaviour
There are some bosses that do a simple reset and other "more complex" bosses do a "phased reset" and then respawn.
**Seems** there is a pattern for which bosses do a simple or phased reset as I will describe below:
### What bosses should run back to the spawn point at reset?
Simple bosses.

- No environmental changes
- No transformations
- No transportations from/to high ground
Examples: Sapphiron & Patchwerk
### What bosses should disappear and respawn at reset?
- Transformations like Professor Putricide in Icecrown Citadel
- Special teleports to a balcony or other movements to large areas like Gothik in Naxxramas
- Mechanics that change the environment like Mimiron in Ulduar
All other bosses with transformations, high ground teleports, and with mechanics that change environment should disappear (phasing) and respawn after some time (amount of time is unknown)
### Source
All sources from retail *
= Simple reset =
https://youtu.be/C5UzzD8SfPo?t=140
https://youtu.be/PMYg2cxS9bA?t=200
https://youtu.be/oX0PnQrjF0A?t=305
https://youtu.be/kjFabmZSsPw?t=50
https://youtu.be/fN_GElMn__k?t=105
= Phased reset =
https://youtu.be/3kvpRJPGx8c?t=204
https://youtu.be/-jWBa33m8FE?t=180
https://youtu.be/j3KvFCmmsqM?t=376
https://youtu.be/54ynIY7Ntwo?t=106
https://youtu.be/yULPhV2qcgA?t=340
https://youtu.be/g2JEnecZD5w?t=42
https://youtu.be/DTBT-55ZHbk?t=265
- This is from Forge of Souls - so it **also applies to dungeons**: https://youtu.be/4MysXz8EY4I?t=195
- This is from Pit of Saron, on Scourgelord Tyrannus : https://youtu.be/qnDRFToSspw?t=70
### Steps to reproduce the problem
1. Engage a boss
2. Reset by dying or using ".gm on"
3. Boss will run back to the spawn point
### Extra Notes
_No response_
### AC rev. hash/commit
efdb64a
### Operating system
Windows 10
### Custom changes or Modules
None | 1.0 | (Creature): Bosses do not reset correctly [phase+respawn] - ### Current Behaviour
All bosses run back to their spawn point once a reset is called
### Expected Blizzlike Behaviour
There are some bosses that do a simple reset and other "more complex" bosses do a "phased reset" and then respawn.
**Seems** there is a pattern for which bosses do a simple or phased reset as I will describe below:
### What bosses should run back to the spawn point at reset?
Simple bosses.

- No environmental changes
- No transformations
- No transportations from/to high ground
Examples: Sapphiron & Patchwerk
### What bosses should disappear and respawn at reset?
- Transformations like Professor Putricide in Icecrown Citadel
- Special teleports to a balcony or other movements to large areas like Gothik in Naxxramas
- Mechanics that change the environment like Mimiron in Ulduar
All other bosses with transformations, high ground teleports, and with mechanics that change environment should disappear (phasing) and respawn after some time (amount of time is unknown)
### Source
All sources from retail *
= Simple reset =
https://youtu.be/C5UzzD8SfPo?t=140
https://youtu.be/PMYg2cxS9bA?t=200
https://youtu.be/oX0PnQrjF0A?t=305
https://youtu.be/kjFabmZSsPw?t=50
https://youtu.be/fN_GElMn__k?t=105
= Phased reset =
https://youtu.be/3kvpRJPGx8c?t=204
https://youtu.be/-jWBa33m8FE?t=180
https://youtu.be/j3KvFCmmsqM?t=376
https://youtu.be/54ynIY7Ntwo?t=106
https://youtu.be/yULPhV2qcgA?t=340
https://youtu.be/g2JEnecZD5w?t=42
https://youtu.be/DTBT-55ZHbk?t=265
- This is from Forge of Souls - so it **also applies to dungeons**: https://youtu.be/4MysXz8EY4I?t=195
- This is from Pit of Saron, on Scourgelord Tyrannus : https://youtu.be/qnDRFToSspw?t=70
### Steps to reproduce the problem
1. Engage a boss
2. Reset by dying or using ".gm on"
3. Boss will run back to the spawn point
### Extra Notes
_No response_
### AC rev. hash/commit
efdb64a
### Operating system
Windows 10
### Custom changes or Modules
None | priority | creature bosses do not reset correctly current behaviour all bosses run back to their spawn point once a reset is called expected blizzlike behaviour there are some bosses that do a simple reset and other more complex bosses do a phased reset and then respawn seems there is a pattern for which bosses do a simple or phased reset as i will describe below what bosses should run back to the spawn point at reset simple bosses no environmental changes no transformations no transportations from to high ground examples sapphiron patchwerk what bosses should disappear and respawn at reset transformations like professor putricide in icecrown citadel special teleports to a balcony or other movements to large areas like gothik in naxxramas mechanics that change the environment like mimiron in ulduar all other bosses with transformations high ground teleports and with mechanics that change environment should disappear phasing and respawn after some time amount of time is unknown source all sources from retail simple reset phased reset this is from forge of souls so it also applies to dungeons this is from pit of saron on scourgelord tyrannus steps to reproduce the problem engage a boss reset by dying or using gm on boss will run back to the spawn point extra notes no response ac rev hash commit operating system windows custom changes or modules none | 1 |
284,580 | 8,744,038,513 | IssuesEvent | 2018-12-12 20:59:33 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Popup info appears too high to see on starter camp | Medium Priority | 

Lets make this really easy to spot since the player isnt familiar with the game at this point.
| 1.0 | Popup info appears too high to see on starter camp - 

Lets make this really easy to spot since the player isnt familiar with the game at this point.
| priority | popup info appears too high to see on starter camp lets make this really easy to spot since the player isnt familiar with the game at this point | 1 |
270,173 | 8,453,023,113 | IssuesEvent | 2018-10-20 11:20:03 | jredfox/evilnotchlib | https://api.github.com/repos/jredfox/evilnotchlib | closed | Pick Block Event TE issues | bug fixed next release good first issue priority=medium | Version: snapshot 76
Issue: the pickblock event data gets it on client side only rather then from the server this is invalid behavior that should have been removed in the integrated mc server update and especially 1.8
Steps to reproduce:
place spawener with multiple indexes go out of chunks far away
go back middle click spawner with silkspawners notice no spawnpotentials
| 1.0 | Pick Block Event TE issues - Version: snapshot 76
Issue: the pickblock event data gets it on client side only rather then from the server this is invalid behavior that should have been removed in the integrated mc server update and especially 1.8
Steps to reproduce:
place spawener with multiple indexes go out of chunks far away
go back middle click spawner with silkspawners notice no spawnpotentials
| priority | pick block event te issues version snapshot issue the pickblock event data gets it on client side only rather then from the server this is invalid behavior that should have been removed in the integrated mc server update and especially steps to reproduce place spawener with multiple indexes go out of chunks far away go back middle click spawner with silkspawners notice no spawnpotentials | 1 |
127,217 | 5,026,192,842 | IssuesEvent | 2016-12-15 11:42:46 | GafferHQ/gaffer | https://api.github.com/repos/GafferHQ/gaffer | closed | Ensure all reader nodes have a 'Reload' button | component-image component-renderman component-scene component-ui priority-medium type-enhancement | - [ ] ImageReader
- [ ] SceneReader
- [ ] ObjectReader
- [ ] RenderManShader
- [ ] ?
| 1.0 | Ensure all reader nodes have a 'Reload' button - - [ ] ImageReader
- [ ] SceneReader
- [ ] ObjectReader
- [ ] RenderManShader
- [ ] ?
| priority | ensure all reader nodes have a reload button imagereader scenereader objectreader rendermanshader | 1 |
663,660 | 22,201,098,558 | IssuesEvent | 2022-06-07 11:16:34 | COS301-SE-2022/Vote-Vault | https://api.github.com/repos/COS301-SE-2022/Vote-Vault | closed | 💄 (Website) Edit the sizing properties of logo in header | enhancement scope:ui priority:medium | Fixed the width and height of the image logo in the heade navbar | 1.0 | 💄 (Website) Edit the sizing properties of logo in header - Fixed the width and height of the image logo in the heade navbar | priority | 💄 website edit the sizing properties of logo in header fixed the width and height of the image logo in the heade navbar | 1 |
815,679 | 30,567,231,956 | IssuesEvent | 2023-07-20 18:46:21 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] YugabyteDB locks nonexistent rows where PostgreSQL doesn't | kind/bug area/ysql priority/medium | Jira Link: [DB-7309](https://yugabyte.atlassian.net/browse/DB-7309)
### Description
Setup:
```
CREATE TABLE t (k int PRIMARY KEY);
```
Session 1:
```
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM t WHERE k=1 FOR UPDATE;
```
Session 2:
```
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO t VALUES (1);
```
PG behavior: The insert completes, both transactions can commit.
YB behavior: The insert blocks.
Due to #18316, this currently happens in other isolation modes as well, but I believe the behavior in isolation level SERIALIZABLE was preexisting.
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-7309]: https://yugabyte.atlassian.net/browse/DB-7309?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] YugabyteDB locks nonexistent rows where PostgreSQL doesn't - Jira Link: [DB-7309](https://yugabyte.atlassian.net/browse/DB-7309)
### Description
Setup:
```
CREATE TABLE t (k int PRIMARY KEY);
```
Session 1:
```
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM t WHERE k=1 FOR UPDATE;
```
Session 2:
```
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO t VALUES (1);
```
PG behavior: The insert completes, both transactions can commit.
YB behavior: The insert blocks.
Due to #18316, this currently happens in other isolation modes as well, but I believe the behavior in isolation level SERIALIZABLE was preexisting.
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-7309]: https://yugabyte.atlassian.net/browse/DB-7309?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | yugabytedb locks nonexistent rows where postgresql doesn t jira link description setup create table t k int primary key session begin isolation level serializable select from t where k for update session begin isolation level serializable insert into t values pg behavior the insert completes both transactions can commit yb behavior the insert blocks due to this currently happens in other isolation modes as well but i believe the behavior in isolation level serializable was preexisting warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
782,963 | 27,512,552,099 | IssuesEvent | 2023-03-06 09:49:31 | scaleway/terraform-provider-scaleway | https://api.github.com/repos/scaleway/terraform-provider-scaleway | closed | add data source for scaleway_lb_ips | enhancement load-balancer priority:medium | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
- List IPs
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* scaleway_XXXXX
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
--->
* #0000
| 1.0 | add data source for scaleway_lb_ips - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
- List IPs
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* scaleway_XXXXX
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
--->
* #0000
| priority | add data source for scaleway lb ips community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description list ips new or affected resource s scaleway xxxxx potential terraform configuration hcl copy paste your terraform configurations here for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key references information about referencing github issues | 1 |
752,380 | 26,283,625,785 | IssuesEvent | 2023-01-07 15:36:30 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Overriding `commitMessageTopic` fails for github-action | priority-3-medium type:docs status:in-progress | ### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
Given the following config:
```json
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"baseBranches": ["main"],
"extends": [
":separateMajorReleases",
":rebaseStalePrs",
":disableRateLimiting",
":docker",
":semanticCommits"
],
"enabledManagers": ["github-actions"],
"commitMessageAction": "",
"commitMessageTopic": "{{depName}}"
}
```
the `commitMessageTopic` still seems to contain "action" when a PR is created:
> chore(deps): jbergstroem/hadolint-gh-action action to v1.10.0
Link to PR: https://github.com/jbergstroem/renovatebot-gh-action/pull/1
### Relevant debug logs
After reverting my custom `commitMessageExtra`, the debug logs show that renovatebot applies the extra "action" in `commitMessageTopic`:
Renovate job #881842651, highlight:
```json5
INFO: PR created(branch="renovate/all-minor-patch")
{
"baseBranch": "main",
"pr": 1,
"prTitle": "chore(deps): jbergstroem/hadolint-gh-action action v1.10.0"
}
```
### Have you created a minimal reproduction repository?
Config, github action and PR is available here: https://github.com/jbergstroem/renovatebot-gh-action | 1.0 | Overriding `commitMessageTopic` fails for github-action - ### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
Given the following config:
```json
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"baseBranches": ["main"],
"extends": [
":separateMajorReleases",
":rebaseStalePrs",
":disableRateLimiting",
":docker",
":semanticCommits"
],
"enabledManagers": ["github-actions"],
"commitMessageAction": "",
"commitMessageTopic": "{{depName}}"
}
```
the `commitMessageTopic` still seems to contain "action" when a PR is created:
> chore(deps): jbergstroem/hadolint-gh-action action to v1.10.0
Link to PR: https://github.com/jbergstroem/renovatebot-gh-action/pull/1
### Relevant debug logs
After reverting my custom `commitMessageExtra`, the debug logs show that renovatebot applies the extra "action" in `commitMessageTopic`:
Renovate job #881842651, highlight:
```json5
INFO: PR created(branch="renovate/all-minor-patch")
{
"baseBranch": "main",
"pr": 1,
"prTitle": "chore(deps): jbergstroem/hadolint-gh-action action v1.10.0"
}
```
### Have you created a minimal reproduction repository?
Config, github action and PR is available here: https://github.com/jbergstroem/renovatebot-gh-action | priority | overriding commitmessagetopic fails for github action how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response if you re self hosting renovate select which platform you are using no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug given the following config json schema basebranches extends separatemajorreleases rebasestaleprs disableratelimiting docker semanticcommits enabledmanagers commitmessageaction commitmessagetopic depname the commitmessagetopic still seems to contain action when a pr is created chore deps jbergstroem hadolint gh action action to link to pr relevant debug logs after reverting my custom commitmessageextra the debug logs show that renovatebot applies the extra action in commitmessagetopic renovate job highlight info pr created branch renovate all minor patch basebranch main pr prtitle chore deps jbergstroem hadolint gh action action have you created a minimal reproduction repository config github action and pr is available here | 1 |
281,000 | 8,689,281,623 | IssuesEvent | 2018-12-03 18:15:56 | minio/minio | https://api.github.com/repos/minio/minio | closed | Minio in docker does not generate a config.json file in the desired location | community priority: medium working as intended | I am having a problem making minio generates a config.json file outside the docker container.
it does generate the JSON file when using the local server, however, when using b2/aws gateway it fails to generate the JSON file.
also, I can not find it in docker itself under '/root/.minio' directory.
## Expected Behavior
Be able to have access to config.json outside the docker container.
## Current Behavior
it does generate the config file when configuring minio for the local server, but it won't do same for the b2/aws gateways.
## Steps to Reproduce (for bugs)
using docker docker-compose file:
```
minio2:
container_name: minio-b2
image: minio/minio
restart: on-failure
volumes:
- e:/Docker/data/minio/b2:/data
- e:/Docker/data/minio/b2/config:/root/.minio
environment:
MINIO_ACCESS_KEY: access-key
MINIO_SECRET_KEY: secret-key
MINIO_HTTP_TRACE: /data/minio.log
ports:
- 9001:9000
command: --config-dir /data/config gateway b2
```
## Context
i am trying to centerlize configuration file, so multiple minio instance can have access. to the same configuration file.
Thanks,
| 1.0 | Minio in docker does not generate a config.json file in the desired location - I am having a problem making minio generates a config.json file outside the docker container.
it does generate the JSON file when using the local server, however, when using b2/aws gateway it fails to generate the JSON file.
also, I can not find it in docker itself under '/root/.minio' directory.
## Expected Behavior
Be able to have access to config.json outside the docker container.
## Current Behavior
it does generate the config file when configuring minio for the local server, but it won't do same for the b2/aws gateways.
## Steps to Reproduce (for bugs)
using docker docker-compose file:
```
minio2:
container_name: minio-b2
image: minio/minio
restart: on-failure
volumes:
- e:/Docker/data/minio/b2:/data
- e:/Docker/data/minio/b2/config:/root/.minio
environment:
MINIO_ACCESS_KEY: access-key
MINIO_SECRET_KEY: secret-key
MINIO_HTTP_TRACE: /data/minio.log
ports:
- 9001:9000
command: --config-dir /data/config gateway b2
```
## Context
i am trying to centerlize configuration file, so multiple minio instance can have access. to the same configuration file.
Thanks,
| priority | minio in docker does not generate a config json file in the desired location i am having a problem making minio generates a config json file outside the docker container it does generate the json file when using the local server however when using aws gateway it fails to generate the json file also i can not find it in docker itself under root minio directory expected behavior be able to have access to config json outside the docker container current behavior it does generate the config file when configuring minio for the local server but it won t do same for the aws gateways steps to reproduce for bugs using docker docker compose file container name minio image minio minio restart on failure volumes e docker data minio data e docker data minio config root minio environment minio access key access key minio secret key secret key minio http trace data minio log ports command config dir data config gateway context i am trying to centerlize configuration file so multiple minio instance can have access to the same configuration file thanks | 1 |
22,917 | 2,651,354,105 | IssuesEvent | 2015-03-16 10:50:28 | nfprojects/nfengine | https://api.github.com/repos/nfprojects/nfengine | closed | Prepare nfCommon for Linux port | low priority medium new feature | This will be the first step in porting the whole engine.
* Write CMake files.
* Run compilation.
* Make sure tests pass
* If minor fixes are needed in multi-platform parts of the code, do them
* Extract parts of platform-dependent code (for example WinAPI vs. X usage for managing windows) to separate cpp file and configure build.
* DO NOT add Linux-specific code yet. This will be done in separate tasks to avoid too huge pull requests. | 1.0 | Prepare nfCommon for Linux port - This will be the first step in porting the whole engine.
* Write CMake files.
* Run compilation.
* Make sure tests pass
* If minor fixes are needed in multi-platform parts of the code, do them
* Extract parts of platform-dependent code (for example WinAPI vs. X usage for managing windows) to separate cpp file and configure build.
* DO NOT add Linux-specific code yet. This will be done in separate tasks to avoid too huge pull requests. | priority | prepare nfcommon for linux port this will be the first step in porting the whole engine write cmake files run compilation make sure tests pass if minor fixes are needed in multi platform parts of the code do them extract parts of platform dependent code for example winapi vs x usage for managing windows to separate cpp file and configure build do not add linux specific code yet this will be done in separate tasks to avoid too huge pull requests | 1 |
99,140 | 4,048,280,179 | IssuesEvent | 2016-05-23 09:41:59 | OCHA-DAP/liverpool16 | https://api.github.com/repos/OCHA-DAP/liverpool16 | closed | Using the config selector on Android devices opens the keyboard which messes with the page | bug Medium Priority Mobile | Check screenshot @danmihaila

| 1.0 | Using the config selector on Android devices opens the keyboard which messes with the page - Check screenshot @danmihaila

| priority | using the config selector on android devices opens the keyboard which messes with the page check screenshot danmihaila | 1 |
773,657 | 27,165,297,711 | IssuesEvent | 2023-02-17 14:58:43 | episphere/connectApp | https://api.github.com/repos/episphere/connectApp | closed | Edit language on last PWA electronic consent screen | Medium Priority MVP E consent | Update the language on the final screen of the electronic consent modules for review. Edits tracked here:
https://nih.box.com/s/awqoyltghbolwm5ky5rerdqneu8yzo2y
The edits are restricted to the electronic signature portion of the final screen under the heading "Informed Consent." Please loop in @cunnaneaq and @depietrodeanna when ready for review.
@Davinkjohnson I will leave it to you to assign appropriate team member to this issue. | 1.0 | Edit language on last PWA electronic consent screen - Update the language on the final screen of the electronic consent modules for review. Edits tracked here:
https://nih.box.com/s/awqoyltghbolwm5ky5rerdqneu8yzo2y
The edits are restricted to the electronic signature portion of the final screen under the heading "Informed Consent." Please loop in @cunnaneaq and @depietrodeanna when ready for review.
@Davinkjohnson I will leave it to you to assign appropriate team member to this issue. | priority | edit language on last pwa electronic consent screen update the language on the final screen of the electronic consent modules for review edits tracked here the edits are restricted to the electronic signature portion of the final screen under the heading informed consent please loop in cunnaneaq and depietrodeanna when ready for review davinkjohnson i will leave it to you to assign appropriate team member to this issue | 1 |
576,427 | 17,086,884,650 | IssuesEvent | 2021-07-08 12:58:59 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | dismiss alarms from the dashboard | area/health area/web feature request priority/medium | I was setting up a new dockerized project on my server yesterday. The `docker-compose.yml` file contained a definition of an internal network that was shared by a few local services. Because I was running `docker-compose up` and `docker-compose down` until I felt satisfied with the result, this network was killed and recreated several times.
It seems that netdata picked all these temporary networks and is now warning me about them being unavailable for some time:

This list goes on and on.
I'm wondering if there could be any way to distinguish this case (which is obviously not something to worry about) with some real network issues. Shame that I can't suggest any solution because I'm not competent enough in how monitoring works and how docker networks are seen by the OS or netdata.
Feel free to join the discussion if you have any thoughts.
| 1.0 | dismiss alarms from the dashboard - I was setting up a new dockerized project on my server yesterday. The `docker-compose.yml` file contained a definition of an internal network that was shared by a few local services. Because I was running `docker-compose up` and `docker-compose down` until I felt satisfied with the result, this network was killed and recreated several times.
It seems that netdata picked all these temporary networks and is now warning me about them being unavailable for some time:

This list goes on and on.
I'm wondering if there could be any way to distinguish this case (which is obviously not something to worry about) with some real network issues. Shame that I can't suggest any solution because I'm not competent enough in how monitoring works and how docker networks are seen by the OS or netdata.
Feel free to join the discussion if you have any thoughts.
| priority | dismiss alarms from the dashboard i was setting up a new dockerized project on my server yesterday the docker compose yml file contained a definition of an internal network that was shared by a few local services because i was running docker compose up and docker compose down until i felt satisfied with the result this network was killed and recreated several times it seems that netdata picked all these temporary networks and is now warning me about them being unavailable for some time this list goes on and on i m wondering if there could be any way to distinguish this case which is obviously not something to worry about with some real network issues shame that i can t suggest any solution because i m not competent enough in how monitoring works and how docker networks are seen by the os or netdata feel free to join the discussion if you have any thoughts | 1 |
753,469 | 26,347,752,991 | IssuesEvent | 2023-01-11 00:18:19 | belav/csharpier | https://api.github.com/repos/belav/csharpier | closed | Consider always putting generic type constraints onto a new line | area:formatting priority:medium | According to [this stylecop rule](https://github.com/DotNetAnalyzers/StyleCopAnalyzers/blob/master/documentation/SA1127.md) generic type constraints should always be on a new line.
I think I am on board with modifying this
```c#
public static T CreatePipelineResult<T>(
T result,
ResultCode resultCode,
SubCode subCode,
string message = null
) where T : PipeResultBase
{
// vs
public static T CreatePipelineResult<T>(
T result,
ResultCode resultCode,
SubCode subCode,
string message = null
)
where T : PipeResultBase
{
```
But it makes less sense in cases like this
```c#
public int Foo<T>(T obj) where T : U;
// vs
public int Foo<T>(T obj)
where T : U;
```
From the files in csharpier-repos, only 39% of generic type constraints are on a new line. But that doesn't take into account super short examples vs long ones.
| 1.0 | Consider always putting generic type constraints onto a new line - According to [this stylecop rule](https://github.com/DotNetAnalyzers/StyleCopAnalyzers/blob/master/documentation/SA1127.md) generic type constraints should always be on a new line.
I think I am on board with modifying this
```c#
public static T CreatePipelineResult<T>(
T result,
ResultCode resultCode,
SubCode subCode,
string message = null
) where T : PipeResultBase
{
// vs
public static T CreatePipelineResult<T>(
T result,
ResultCode resultCode,
SubCode subCode,
string message = null
)
where T : PipeResultBase
{
```
But it makes less sense in cases like this
```c#
public int Foo<T>(T obj) where T : U;
// vs
public int Foo<T>(T obj)
where T : U;
```
From the files in csharpier-repos, only 39% of generic type constraints are on a new line. But that doesn't take into account super short examples vs long ones.
| priority | consider always putting generic type constraints onto a new line according to generic type constraints should always be on a new line i think i am on board with modifying this c public static t createpipelineresult t result resultcode resultcode subcode subcode string message null where t piperesultbase vs public static t createpipelineresult t result resultcode resultcode subcode subcode string message null where t piperesultbase but it makes less sense in cases like this c public int foo t obj where t u vs public int foo t obj where t u from the files in csharpier repos only of generic type constraints are on a new line but that doesn t take into account super short examples vs long ones | 1 |
67,608 | 3,275,472,653 | IssuesEvent | 2015-10-26 15:42:25 | nikcross/open-forum | https://api.github.com/repos/nikcross/open-forum | closed | Refactor Wiki code into plugable modules | auto-migrated Priority-Medium Type-Enhancement | ```
Break out:
* Search
* Authentication
* File System
into separate modules defined by interface
Have them under Jar Manager control
```
Original issue reported on code.google.com by `nicholas...@gmail.com` on 14 May 2008 at 10:32 | 1.0 | Refactor Wiki code into plugable modules - ```
Break out:
* Search
* Authentication
* File System
into separate modules defined by interface
Have them under Jar Manager control
```
Original issue reported on code.google.com by `nicholas...@gmail.com` on 14 May 2008 at 10:32 | priority | refactor wiki code into plugable modules break out search authentication file system into separate modules defined by interface have them under jar manager control original issue reported on code google com by nicholas gmail com on may at | 1 |
485,871 | 14,000,705,944 | IssuesEvent | 2020-10-28 12:42:52 | AY2021S1-CS2103T-T12-2/tp | https://api.github.com/repos/AY2021S1-CS2103T-T12-2/tp | closed | As a store manager, I want to know who are working on the next day | priority.Medium type.Story | ... so that I can make plan for next day beforehand. | 1.0 | As a store manager, I want to know who are working on the next day - ... so that I can make plan for next day beforehand. | priority | as a store manager i want to know who are working on the next day so that i can make plan for next day beforehand | 1 |
701,447 | 24,098,434,988 | IssuesEvent | 2022-09-19 21:07:45 | radical-cybertools/radical.pilot | https://api.github.com/repos/radical-cybertools/radical.pilot | closed | Feature Request: run in an interactive job | type:feature topic:api topic:resource priority:medium comp:agent:bootstrapper comp:agent topic:configuration | The agent will always assume it's running with the local resource manager - but we should be able to configure other resource managers. However, those will not see the batch system's environment - that would need to be communicated from client to agent.
See radical-collaboration/hpc-workflows/issues/154 | 1.0 | Feature Request: run in an interactive job - The agent will always assume it's running with the local resource manager - but we should be able to configure other resource managers. However, those will not see the batch system's environment - that would need to be communicated from client to agent.
See radical-collaboration/hpc-workflows/issues/154 | priority | feature request run in an interactive job the agent will always assume it s running with the local resource manager but we should be able to configure other resource managers however those will not see the batch system s environment that would need to be communicated from client to agent see radical collaboration hpc workflows issues | 1 |
200,231 | 7,001,664,798 | IssuesEvent | 2017-12-18 11:04:52 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Normalize SunPy sample data | Effort Medium Feature Request Hacktoberfest Package Novice Priority Medium Refactoring | The current sample data for SunPy (e.g. sunpy.AIA_171_IMAGE, sunpy.RHESSI_IMAGE, etc) were chosen somewhat arbitrarily and have do not share anything in common.
I think it would be useful to take a more systematic approach to selecting sample data to include with SunPy.
Things to consider:
1. Choose data for same time period to allow for meaningful overlay and combination examples.
2. Decide which data sets should be included -- unless we decide we should have one sample image for each type of data supported, we should probably decide before-hand which types of data to include. Probably one sample image for each _type_ of data (disk image, coronagraph, single light-curve, multi-column lightcurve, etc) would be sufficient.
3. Since users will have to download all of these, make sure the files are relatively small.
| 1.0 | Normalize SunPy sample data - The current sample data for SunPy (e.g. sunpy.AIA_171_IMAGE, sunpy.RHESSI_IMAGE, etc) were chosen somewhat arbitrarily and have do not share anything in common.
I think it would be useful to take a more systematic approach to selecting sample data to include with SunPy.
Things to consider:
1. Choose data for same time period to allow for meaningful overlay and combination examples.
2. Decide which data sets should be included -- unless we decide we should have one sample image for each type of data supported, we should probably decide before-hand which types of data to include. Probably one sample image for each _type_ of data (disk image, coronagraph, single light-curve, multi-column lightcurve, etc) would be sufficient.
3. Since users will have to download all of these, make sure the files are relatively small.
| priority | normalize sunpy sample data the current sample data for sunpy e g sunpy aia image sunpy rhessi image etc were chosen somewhat arbitrarily and have do not share anything in common i think it would be useful to take a more systematic approach to selecting sample data to include with sunpy things to consider choose data for same time period to allow for meaningful overlay and combination examples decide which data sets should be included unless we decide we should have one sample image for each type of data supported we should probably decide before hand which types of data to include probably one sample image for each type of data disk image coronagraph single light curve multi column lightcurve etc would be sufficient since users will have to download all of these make sure the files are relatively small | 1 |
502,082 | 14,539,819,107 | IssuesEvent | 2020-12-15 12:26:48 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | Line starting with single space followed by `#` is not considered comment | beta 2 bug priority: medium | Hello ,
We're in process of migrating from Robot3.0.4 to 3.2.1, and found the following issues, I did search through the release notes of 3.1 and 3.2.1 I couldn't find any related tickets. So thought report them here and get some insights.
resources files with keywords in it had comments starting '#' character which are being reported as ERRORs with 3.2.1 while they were working fine with 3.0.4.
Error in resource file ‘/abcd/lib_resources.robot': Creating keyword ' ############################' failed: Keyword with same name defined multiple times
Error in resource file /abcd/lib_resources.robot': Creating keyword ' #' failed: Keyword with same name defined multiple times
Duplicate keyword errors!, these were working without errors earlier!, How do I make it to resolve to BuiltIn. I verified earlier logs and newer logs, I noticed it used to get resolved for first findings i.e BuiltIn.Sleep.
Multiple keywords with name 'Sleep' found. Give the full name of the keyword you want to use:
BuiltIn.Sleep
mykeywordsLib.Sleep
Any help to address the above would be appreciated. Thanks!
```
*** Keywords ***
###########################
Connect To Device
###########################
[Documentation]
... PURPOSE :
... This keyword connects to device….
Disconnect
[Documentation]
... PURPOSE :
... This keyword disconnects…
# Disconnect keyword
DISCONNECT TO DEVICE <some args….>
``` | 1.0 | Line starting with single space followed by `#` is not considered comment - Hello ,
We're in process of migrating from Robot3.0.4 to 3.2.1, and found the following issues, I did search through the release notes of 3.1 and 3.2.1 I couldn't find any related tickets. So thought report them here and get some insights.
resources files with keywords in it had comments starting '#' character which are being reported as ERRORs with 3.2.1 while they were working fine with 3.0.4.
Error in resource file ‘/abcd/lib_resources.robot': Creating keyword ' ############################' failed: Keyword with same name defined multiple times
Error in resource file /abcd/lib_resources.robot': Creating keyword ' #' failed: Keyword with same name defined multiple times
Duplicate keyword errors!, these were working without errors earlier!, How do I make it to resolve to BuiltIn. I verified earlier logs and newer logs, I noticed it used to get resolved for first findings i.e BuiltIn.Sleep.
Multiple keywords with name 'Sleep' found. Give the full name of the keyword you want to use:
BuiltIn.Sleep
mykeywordsLib.Sleep
Any help to address the above would be appreciated. Thanks!
```
*** Keywords ***
###########################
Connect To Device
###########################
[Documentation]
... PURPOSE :
... This keyword connects to device….
Disconnect
[Documentation]
... PURPOSE :
... This keyword disconnects…
# Disconnect keyword
DISCONNECT TO DEVICE <some args….>
``` | priority | line starting with single space followed by is not considered comment hello we re in process of migrating from to and found the following issues i did search through the release notes of and i couldn t find any related tickets so thought report them here and get some insights resources files with keywords in it had comments starting character which are being reported as errors with while they were working fine with error in resource file ‘ abcd lib resources robot creating keyword failed keyword with same name defined multiple times error in resource file abcd lib resources robot creating keyword failed keyword with same name defined multiple times duplicate keyword errors these were working without errors earlier how do i make it to resolve to builtin i verified earlier logs and newer logs i noticed it used to get resolved for first findings i e builtin sleep multiple keywords with name sleep found give the full name of the keyword you want to use builtin sleep mykeywordslib sleep any help to address the above would be appreciated thanks keywords connect to device purpose this keyword connects to device… disconnect purpose this keyword disconnects… disconnect keyword disconnect to device | 1 |
430,163 | 12,440,839,895 | IssuesEvent | 2020-05-26 12:43:22 | BgeeDB/bgee_apps | https://api.github.com/repos/BgeeDB/bgee_apps | opened | Show information about confidence in ranks | priority: medium | In GitLab by @fbastian on Jul 1, 2016, 14:45
For conditions with only ESTs and/or in situ data, a high rank is not likely to mean that the gene is lowly expressed, but most likely that we had no granularity enough in the data to appropriately rank the gene.
We should find a way to convey this information to the user.
Maybe put in gray the scores > 15000 with support only from ESTs and/or in situ?
@marcrr | 1.0 | Show information about confidence in ranks - In GitLab by @fbastian on Jul 1, 2016, 14:45
For conditions with only ESTs and/or in situ data, a high rank is not likely to mean that the gene is lowly expressed, but most likely that we had no granularity enough in the data to appropriately rank the gene.
We should find a way to convey this information to the user.
Maybe put in gray the scores > 15000 with support only from ESTs and/or in situ?
@marcrr | priority | show information about confidence in ranks in gitlab by fbastian on jul for conditions with only ests and or in situ data a high rank is not likely to mean that the gene is lowly expressed but most likely that we had no granularity enough in the data to appropriately rank the gene we should find a way to convey this information to the user maybe put in gray the scores with support only from ests and or in situ marcrr | 1 |
499,196 | 14,442,833,672 | IssuesEvent | 2020-12-07 18:44:04 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | UnconnectedWaySnapper sometimes chooses a poor way node insert index | Category: Algorithms Priority: Medium Status: Ready For Review Type: Bug | This causes malformed ways when road snapping is used with Diff Conflate.
See output of `ServiceDiffNetworkRoadSnapTest`. Snaps to fix:
* way -206 snapped to way 31 via node -301 - snap insert index probably should be 2 instead of 3
* way -72 snapped to way -289 via node -225
* way -67 snapped to way -169 via node -193
Fixes for this will also affect `DiffConflateCmdTest` and `ServiceDiffUnifyingRoadSnapTest`. | 1.0 | UnconnectedWaySnapper sometimes chooses a poor way node insert index - This causes malformed ways when road snapping is used with Diff Conflate.
See output of `ServiceDiffNetworkRoadSnapTest`. Snaps to fix:
* way -206 snapped to way 31 via node -301 - snap insert index probably should be 2 instead of 3
* way -72 snapped to way -289 via node -225
* way -67 snapped to way -169 via node -193
Fixes for this will also affect `DiffConflateCmdTest` and `ServiceDiffUnifyingRoadSnapTest`. | priority | unconnectedwaysnapper sometimes chooses a poor way node insert index this causes malformed ways when road snapping is used with diff conflate see output of servicediffnetworkroadsnaptest snaps to fix way snapped to way via node snap insert index probably should be instead of way snapped to way via node way snapped to way via node fixes for this will also affect diffconflatecmdtest and servicediffunifyingroadsnaptest | 1 |
430,558 | 12,462,949,726 | IssuesEvent | 2020-05-28 09:45:37 | RevivalPMMP/PureEntitiesX | https://api.github.com/repos/RevivalPMMP/PureEntitiesX | closed | Monsters crossing walls, sheeps flying around | Category: Bug Priority: Medium Status: Confirmed | <!-- REQUIRED INFORMATION - Labels should be self explainatory, failure to fill out this section will get your issued closed with Resolution status of "Invalid". -->
## Required Information
__PocketMine-MP Version:__ 1.7dev-318
__Plugin Version:__ 0.2.8-3.alpha9
Where you got the plugin: Cloned from github
<!-- OPTIONAL INFORMATION - use this section for posting crash dumps, backtraces or other files(please use code markdown!) -->
## Optional Information
__PHP version:__ PHP 7.2.0RC6 (cli)
__Other Installed Plugins:__ none
__OS Version:__ aarch64 linux.suse 4.4.92-18.36-default on Raspberry 3
<!-- STEPS TO REPRODUCE - Don't fill this out if the problem occurs immediately after install. -->
## Steps to reproduce the issue.
1. Play the game, see monsters crossing walls and animals flying around. Also on sea.
2.
...
If you can give me initial orientation (WalkingEntity ?), I could maybe provide a fix as pull request.
Thanks, regards,
Rainer Feike | 1.0 | Monsters crossing walls, sheeps flying around - <!-- REQUIRED INFORMATION - Labels should be self explainatory, failure to fill out this section will get your issued closed with Resolution status of "Invalid". -->
## Required Information
__PocketMine-MP Version:__ 1.7dev-318
__Plugin Version:__ 0.2.8-3.alpha9
Where you got the plugin: Cloned from github
<!-- OPTIONAL INFORMATION - use this section for posting crash dumps, backtraces or other files(please use code markdown!) -->
## Optional Information
__PHP version:__ PHP 7.2.0RC6 (cli)
__Other Installed Plugins:__ none
__OS Version:__ aarch64 linux.suse 4.4.92-18.36-default on Raspberry 3
<!-- STEPS TO REPRODUCE - Don't fill this out if the problem occurs immediately after install. -->
## Steps to reproduce the issue.
1. Play the game, see monsters crossing walls and animals flying around. Also on sea.
2.
...
If you can give me initial orientation (WalkingEntity ?), I could maybe provide a fix as pull request.
Thanks, regards,
Rainer Feike | priority | monsters crossing walls sheeps flying around required information pocketmine mp version plugin version where you got the plugin cloned from github optional information php version php cli other installed plugins none os version linux suse default on raspberry steps to reproduce the issue play the game see monsters crossing walls and animals flying around also on sea if you can give me initial orientation walkingentity i could maybe provide a fix as pull request thanks regards rainer feike | 1 |
253,465 | 8,056,504,523 | IssuesEvent | 2018-08-02 12:56:11 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | Site:WaterMainsTemperature Default | IDDChange NewFeature Priority2 S2 - Medium | ## Issue overview
Based on the Engineering Reference, 'If there is no Site:WaterMainsTemperature object in the input file, a default constant value of 10 C is assumed.' I have found that the actual calculated profile for the water mains temperature is significantly higher for the majority of climates, resulting in an overestimate of domestic hot water use if no Site:WaterMainsTemperature object is in the input file. The graph below shows the water mains temperature profile for the Chicago-O’Hare TMY2 weather file, which many of the Reference Buildings and Example Files are based on. I recommend adjusting the default value to somewhere closer to 12 C, or the average water mains temperature based on the profile for the Chicago-O’Hare TMY2 weather file, when no Site:WaterMainsTemperature object is in the input file to better reflect the majority of climates.

| 1.0 | Site:WaterMainsTemperature Default - ## Issue overview
Based on the Engineering Reference, 'If there is no Site:WaterMainsTemperature object in the input file, a default constant value of 10 C is assumed.' I have found that the actual calculated profile for the water mains temperature is significantly higher for the majority of climates, resulting in an overestimate of domestic hot water use if no Site:WaterMainsTemperature object is in the input file. The graph below shows the water mains temperature profile for the Chicago-O’Hare TMY2 weather file, which many of the Reference Buildings and Example Files are based on. I recommend adjusting the default value to somewhere closer to 12 C, or the average water mains temperature based on the profile for the Chicago-O’Hare TMY2 weather file, when no Site:WaterMainsTemperature object is in the input file to better reflect the majority of climates.

| priority | site watermainstemperature default issue overview based on the engineering reference if there is no site watermainstemperature object in the input file a default constant value of c is assumed i have found that the actual calculated profile for the water mains temperature is significantly higher for the majority of climates resulting in an overestimate of domestic hot water use if no site watermainstemperature object is in the input file the graph below shows the water mains temperature profile for the chicago o’hare weather file which many of the reference buildings and example files are based on i recommend adjusting the default value to somewhere closer to c or the average water mains temperature based on the profile for the chicago o’hare weather file when no site watermainstemperature object is in the input file to better reflect the majority of climates | 1 |
497,407 | 14,369,641,490 | IssuesEvent | 2020-12-01 10:01:49 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.2 staging-1860] Tooltips are closed one by one and take a long time | Category: UI Priority: Medium Type: Regression | Step to reproduce:
- open a lot of tooltips:

- mouseover a free space:
https://drive.google.com/file/d/1kNEwNnayn1vSCC8Ix3geydpsdOg7t7DB/view?usp=sharing | 1.0 | [0.9.2 staging-1860] Tooltips are closed one by one and take a long time - Step to reproduce:
- open a lot of tooltips:

- mouseover a free space:
https://drive.google.com/file/d/1kNEwNnayn1vSCC8Ix3geydpsdOg7t7DB/view?usp=sharing | priority | tooltips are closed one by one and take a long time step to reproduce open a lot of tooltips mouseover a free space | 1 |
718,473 | 24,718,355,836 | IssuesEvent | 2022-10-20 08:47:53 | SkriptLang/Skript | https://api.github.com/repos/SkriptLang/Skript | closed | Fix default variables for 2.7 | bug priority: medium variables core task | ### Description
https://github.com/SkriptLang/Skript/pull/4566/
This needs to be ported over to 2.7 after merge
| 1.0 | Fix default variables for 2.7 - ### Description
https://github.com/SkriptLang/Skript/pull/4566/
This needs to be ported over to 2.7 after merge
| priority | fix default variables for description this needs to be ported over to after merge | 1 |
566,836 | 16,831,848,880 | IssuesEvent | 2021-06-18 06:35:34 | jqwidgets/jQWidgets | https://api.github.com/repos/jqwidgets/jQWidgets | closed | jqxGrid - using of the "cellsrenderer" callback broke the formatting | medium priority | Also, if return the _"**defaulthtml**" argument_ it does not look aligned fine.
**Example:**
http://jsfiddle.net/tw503hbd/
**Topic:**
https://www.jqwidgets.com/community/topic/jqxgrid-with-property-autorowheight-and-cellsrenderer/ | 1.0 | jqxGrid - using of the "cellsrenderer" callback broke the formatting - Also, if return the _"**defaulthtml**" argument_ it does not look aligned fine.
**Example:**
http://jsfiddle.net/tw503hbd/
**Topic:**
https://www.jqwidgets.com/community/topic/jqxgrid-with-property-autorowheight-and-cellsrenderer/ | priority | jqxgrid using of the cellsrenderer callback broke the formatting also if return the defaulthtml argument it does not look aligned fine example topic | 1 |
209,969 | 7,181,853,920 | IssuesEvent | 2018-02-01 07:28:57 | dkpro/dkpro-tc | https://api.github.com/repos/dkpro/dkpro-tc | closed | Adapter and report for statistics evaluation package | Priority-Medium bug | Originally reported on Google Code with ID 224
```
This issue was created by revision r1296.
Created reports, adapter and a demo (20newsgroup)
```
Reported by `daxenberger.j` on 2014-12-10 21:56:03
| 1.0 | Adapter and report for statistics evaluation package - Originally reported on Google Code with ID 224
```
This issue was created by revision r1296.
Created reports, adapter and a demo (20newsgroup)
```
Reported by `daxenberger.j` on 2014-12-10 21:56:03
| priority | adapter and report for statistics evaluation package originally reported on google code with id this issue was created by revision created reports adapter and a demo reported by daxenberger j on | 1 |
754,005 | 26,370,331,138 | IssuesEvent | 2023-01-11 20:05:15 | Slicer/Slicer | https://api.github.com/repos/Slicer/Slicer | closed | Independent top level view windows so they can be on separate monitors | type:enhancement priority:medium | _This issue was created automatically from an original [Mantis Issue](https://mantisarchive.slicer.org/view.php?id=2104). Further discussion may take place here._ | 1.0 | Independent top level view windows so they can be on separate monitors - _This issue was created automatically from an original [Mantis Issue](https://mantisarchive.slicer.org/view.php?id=2104). Further discussion may take place here._ | priority | independent top level view windows so they can be on separate monitors this issue was created automatically from an original further discussion may take place here | 1 |
80,676 | 3,573,006,362 | IssuesEvent | 2016-01-27 02:51:50 | DistrictDataLabs/trinket | https://api.github.com/repos/DistrictDataLabs/trinket | closed | SSL 404 Redirect | priority: medium type: bug | Accessing Trinket on Heroku via http (instead of https) leads to a 404 redirect error when signing in with Google. | 1.0 | SSL 404 Redirect - Accessing Trinket on Heroku via http (instead of https) leads to a 404 redirect error when signing in with Google. | priority | ssl redirect accessing trinket on heroku via http instead of https leads to a redirect error when signing in with google | 1 |
709,487 | 24,379,786,168 | IssuesEvent | 2022-10-04 06:40:51 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Integrate Zod validator with http utils | priority-3-medium type:refactor status:ready | ### Describe the proposed change(s).
- [ ] Introduce optional parameter for response data validation
- [ ] Create validation-specific error type that fits our exception handling flow
| 1.0 | Integrate Zod validator with http utils - ### Describe the proposed change(s).
- [ ] Introduce optional parameter for response data validation
- [ ] Create validation-specific error type that fits our exception handling flow
| priority | integrate zod validator with http utils describe the proposed change s introduce optional parameter for response data validation create validation specific error type that fits our exception handling flow | 1 |
199,005 | 6,979,944,649 | IssuesEvent | 2017-12-12 23:01:17 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [Search] RestClient is missing file size when indexing binary documents. | bug priority: medium | When using this method to index binary documents:
```java
public String updateFile(String site, String id, File file, Map<String, List<String>> additionalFields) throws SearchException
```
https://github.com/craftercms/search/blob/2.5.x/crafter-search-client/src/main/java/org/craftercms/search/service/impl/RestClientSearchService.java#L216
The indexed data is missing the stream_size field:
```
"meta": [
"stream_size",
"null",
"stream_content_type",
"text/plain",
"Content-Encoding",
"ISO-8859-1",
"Content-Type",
"text/plain; charset=ISO-8859-1"
]
``` | 1.0 | [Search] RestClient is missing file size when indexing binary documents. - When using this method to index binary documents:
```java
public String updateFile(String site, String id, File file, Map<String, List<String>> additionalFields) throws SearchException
```
https://github.com/craftercms/search/blob/2.5.x/crafter-search-client/src/main/java/org/craftercms/search/service/impl/RestClientSearchService.java#L216
The indexed data is missing the stream_size field:
```
"meta": [
"stream_size",
"null",
"stream_content_type",
"text/plain",
"Content-Encoding",
"ISO-8859-1",
"Content-Type",
"text/plain; charset=ISO-8859-1"
]
``` | priority | restclient is missing file size when indexing binary documents when using this method to index binary documents java public string updatefile string site string id file file map additionalfields throws searchexception the indexed data is missing the stream size field meta stream size null stream content type text plain content encoding iso content type text plain charset iso | 1 |
113,173 | 4,544,110,845 | IssuesEvent | 2016-09-10 14:09:22 | 4-20ma/ModbusMaster | https://api.github.com/repos/4-20ma/ModbusMaster | opened | Add continuous integration testing with travis | Priority: Medium Status: In Progress Type: Feature Request | <!----------------------------------------------------------------------------
Title - ensure the issue title is clear & concise
- QUESTIONS - describe the specific question
- BUG REPORTS - describe an activity
- FEATURE REQUESTS - describe an activity
-->
<!----------------------------------------------------------------------------
Provide the following information for all issues. Replace [brackets] and placeholder text with your responses.
(QUESTIONS, BUG REPORTS, FEATURE REQUESTS)
-->
### ModbusMaster version
[Version of the project where you are encountering the issue]
### Arduino IDE version
[Version of Arduino IDE in your environment]
### Arduino Hardware
[Hardware information, including board and processor]
### Platform Details
[Operating system distribution and release version]
---
<!----------------------------------------------------------------------------
Provide the following for QUESTIONS & BUG REPORTS. Replace [brackets] and placeholder text with your responses.
-->
### Scenario:
[What you are trying to achieve and you can't?]
### Steps to Reproduce:
[If you are filing an issue what are the things we need to do in order to repro your problem? How are you using this project or any resources it includes?]
### Expected Result:
[What are you expecting to happen as the consequence of above reproduction steps?]
### Actual Result:
[What actually happens after the reproduction steps? Include the error output or a link to a gist if possible.]
---
<!----------------------------------------------------------------------------
Provide the following for FEATURE REQUESTS. Replace [brackets] and placeholder text with your responses.
Refer to [What's in a Story?](https://dannorth.net/whats-in-a-story/)
-->
### Feature Request
#### Narrative:
<!-- Replace role, feature, benefit. -->
As a [role]
I want [feature]
So that [benefit]
#### Acceptance Criteria:
<!--
Present as one or more Scenarios, replacing context, event, outcome.
-->
Scenario 1: Title
Given [context]
And [some more context]...
When [event]
Then [outcome]
And [another outcome]...
| 1.0 | Add continuous integration testing with travis - <!----------------------------------------------------------------------------
Title - ensure the issue title is clear & concise
- QUESTIONS - describe the specific question
- BUG REPORTS - describe an activity
- FEATURE REQUESTS - describe an activity
-->
<!----------------------------------------------------------------------------
Provide the following information for all issues. Replace [brackets] and placeholder text with your responses.
(QUESTIONS, BUG REPORTS, FEATURE REQUESTS)
-->
### ModbusMaster version
[Version of the project where you are encountering the issue]
### Arduino IDE version
[Version of Arduino IDE in your environment]
### Arduino Hardware
[Hardware information, including board and processor]
### Platform Details
[Operating system distribution and release version]
---
<!----------------------------------------------------------------------------
Provide the following for QUESTIONS & BUG REPORTS. Replace [brackets] and placeholder text with your responses.
-->
### Scenario:
[What you are trying to achieve and you can't?]
### Steps to Reproduce:
[If you are filing an issue what are the things we need to do in order to repro your problem? How are you using this project or any resources it includes?]
### Expected Result:
[What are you expecting to happen as the consequence of above reproduction steps?]
### Actual Result:
[What actually happens after the reproduction steps? Include the error output or a link to a gist if possible.]
---
<!----------------------------------------------------------------------------
Provide the following for FEATURE REQUESTS. Replace [brackets] and placeholder text with your responses.
Refer to [What's in a Story?](https://dannorth.net/whats-in-a-story/)
-->
### Feature Request
#### Narrative:
<!-- Replace role, feature, benefit. -->
As a [role]
I want [feature]
So that [benefit]
#### Acceptance Criteria:
<!--
Present as one or more Scenarios, replacing context, event, outcome.
-->
Scenario 1: Title
Given [context]
And [some more context]...
When [event]
Then [outcome]
And [another outcome]...
| priority | add continuous integration testing with travis title ensure the issue title is clear concise questions describe the specific question bug reports describe an activity feature requests describe an activity provide the following information for all issues replace and placeholder text with your responses questions bug reports feature requests modbusmaster version arduino ide version arduino hardware platform details provide the following for questions bug reports replace and placeholder text with your responses scenario steps to reproduce expected result actual result provide the following for feature requests replace and placeholder text with your responses refer to feature request narrative as a i want so that acceptance criteria present as one or more scenarios replacing context event outcome scenario title given and when then and | 1 |
673,761 | 23,029,996,998 | IssuesEvent | 2022-07-22 13:04:53 | frappe/erpnext | https://api.github.com/repos/frappe/erpnext | opened | Capacity for service unit missing in Quick Entry for Healthcare Service Unit | bug healthcare Enhancement Medium Priority V14 Pre Merge | ### Information about bug
The capacity for service unit is missing in Quick Entry for Healthcare Service Unit. So after filling in all details when a user tries submitting through quick entry it throws appropriate validation error and redirects to full form view
Expected :

Actual :

### Module
other
### Version
V14 Beta2
### Installation method
_No response_
### Relevant log output / Stack trace / Full Error Message.
_No response_ | 1.0 | Capacity for service unit missing in Quick Entry for Healthcare Service Unit - ### Information about bug
The capacity for service unit is missing in Quick Entry for Healthcare Service Unit. So after filling in all details when a user tries submitting through quick entry it throws appropriate validation error and redirects to full form view
Expected :

Actual :

### Module
other
### Version
V14 Beta2
### Installation method
_No response_
### Relevant log output / Stack trace / Full Error Message.
_No response_ | priority | capacity for service unit missing in quick entry for healthcare service unit information about bug the capacity for service unit is missing in quick entry for healthcare service unit so after filling in all details when a user tries submitting through quick entry it throws appropriate validation error and redirects to full form view expected actual module other version installation method no response relevant log output stack trace full error message no response | 1 |
618,267 | 19,431,148,221 | IssuesEvent | 2021-12-21 12:07:53 | vanjarosoftware/Vanjaro.Platform | https://api.github.com/repos/vanjarosoftware/Vanjaro.Platform | closed | Vanjaro Notification Task interface updated | Enhancement Release: Minor Priority: Medium Area: Backend | **Product & Version**
Bug applies to Vanjaro Platform or Vanjaro for DNN.
**Describe the bug**
Added option to add Title and Width of Notification Window or popup
| 1.0 | Vanjaro Notification Task interface updated - **Product & Version**
Bug applies to Vanjaro Platform or Vanjaro for DNN.
**Describe the bug**
Added option to add Title and Width of Notification Window or popup
| priority | vanjaro notification task interface updated product version bug applies to vanjaro platform or vanjaro for dnn describe the bug added option to add title and width of notification window or popup | 1 |
603,157 | 18,529,932,849 | IssuesEvent | 2021-10-21 03:47:25 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | The StatefulSets list shows an error | kind/bug priority/medium | **Describe the bug**


**Versions used(KubeSphere/Kubernetes)**
KubeSphere: nightly-20210927
/kind bug
/@kubesphere/sig-console
/priority medium | 1.0 | The StatefulSets list shows an error - **Describe the bug**


**Versions used(KubeSphere/Kubernetes)**
KubeSphere: nightly-20210927
/kind bug
/@kubesphere/sig-console
/priority medium | priority | the statefulsets list shows an error describe the bug versions used kubesphere kubernetes kubesphere nightly kind bug kubesphere sig console priority medium | 1 |
421,153 | 12,254,457,008 | IssuesEvent | 2020-05-06 08:31:20 | BingLingGroup/autosub | https://api.github.com/repos/BingLingGroup/autosub | closed | Remove required dependency langcodes | Priority: Medium Status: Accepted Type: Enhancement | **Describe the solution you'd like**
Remove required dependency langcodes to avoid using marisa-trie because it needs C++ environment.
| 1.0 | Remove required dependency langcodes - **Describe the solution you'd like**
Remove required dependency langcodes to avoid using marisa-trie because it needs C++ environment.
| priority | remove required dependency langcodes describe the solution you d like remove required dependency langcodes to avoid using marisa trie because it needs c environment | 1 |
164,409 | 6,225,739,895 | IssuesEvent | 2017-07-10 16:51:34 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] certain dialog boxes break preview tools labels | bug Priority: Medium | <img width="1398" alt="screen shot 2017-06-23 at 10 42 40 am" src="https://user-images.githubusercontent.com/169432/27487290-4fe4e652-5801-11e7-8c8f-3e7741552a42.png">
# steps to reproduce
1. open a page in preview
2. open review tools, note the labels look fine
3. refresh, note the labels look fine
4. click on any context nav operations for the page (approve and publish, schedule, delete, history) Note that the labels in preview tools get reverted to the keys for the string.
| 1.0 | [studio-ui] certain dialog boxes break preview tools labels - <img width="1398" alt="screen shot 2017-06-23 at 10 42 40 am" src="https://user-images.githubusercontent.com/169432/27487290-4fe4e652-5801-11e7-8c8f-3e7741552a42.png">
# steps to reproduce
1. open a page in preview
2. open review tools, note the labels look fine
3. refresh, note the labels look fine
4. click on any context nav operations for the page (approve and publish, schedule, delete, history) Note that the labels in preview tools get reverted to the keys for the string.
| priority | certain dialog boxes break preview tools labels img width alt screen shot at am src steps to reproduce open a page in preview open review tools note the labels look fine refresh note the labels look fine click on any context nav operations for the page approve and publish schedule delete history note that the labels in preview tools get reverted to the keys for the string | 1 |
49,641 | 3,003,799,036 | IssuesEvent | 2015-07-25 08:25:43 | jayway/powermock | https://api.github.com/repos/jayway/powermock | opened | powermock not working in IntelliJ but working in eclipse | bug imported Priority-Medium | _From [goldbric...@gmail.com](https://code.google.com/u/108697047924162601890/) on November 28, 2014 08:28:24_
What steps will reproduce the problem? 1.create a static method
2.write to unit test by using powerMockito
3.not working in IntelliJ but working in eclipse What is the expected output? What do you see instead? Expected output was that test case runs successfully.But instead null pointer is thrown as powermock is unable to mock the class What version of the product are you using? On what operating system? IntelliJ,grails 2.2,jdk 1.6 and powermock:powermock-api-mockito:1.5 Please provide any additional information below.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=527_ | 1.0 | powermock not working in IntelliJ but working in eclipse - _From [goldbric...@gmail.com](https://code.google.com/u/108697047924162601890/) on November 28, 2014 08:28:24_
What steps will reproduce the problem? 1.create a static method
2.write to unit test by using powerMockito
3.not working in IntelliJ but working in eclipse What is the expected output? What do you see instead? Expected output was that test case runs successfully.But instead null pointer is thrown as powermock is unable to mock the class What version of the product are you using? On what operating system? IntelliJ,grails 2.2,jdk 1.6 and powermock:powermock-api-mockito:1.5 Please provide any additional information below.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=527_ | priority | powermock not working in intellij but working in eclipse from on november what steps will reproduce the problem create a static method write to unit test by using powermockito not working in intellij but working in eclipse what is the expected output what do you see instead expected output was that test case runs successfully but instead null pointer is thrown as powermock is unable to mock the class what version of the product are you using on what operating system intellij grails jdk and powermock powermock api mockito please provide any additional information below original issue | 1 |
576,006 | 17,068,786,627 | IssuesEvent | 2021-07-07 10:39:34 | gnosis/ido-ux | https://api.github.com/repos/gnosis/ido-ux | closed | Minimum Funding/Estimated Tokens Sold are not calculated when Minimum Funding value =<1 | QA QA passed bug medium priority | 1. Create an auction with Minimum Funding vale <1
2. Create an order for this auction with bidding amount < Minimum Funding
3. Check Minimum Funding/Estimated Tokens Sold fields
**AR**: Minimum Funding/Estimated Tokens Sold = 0


**ER**: circles are recalculated and filled in | 1.0 | Minimum Funding/Estimated Tokens Sold are not calculated when Minimum Funding value =<1 - 1. Create an auction with Minimum Funding vale <1
2. Create an order for this auction with bidding amount < Minimum Funding
3. Check Minimum Funding/Estimated Tokens Sold fields
**AR**: Minimum Funding/Estimated Tokens Sold = 0


**ER**: circles are recalculated and filled in | priority | minimum funding estimated tokens sold are not calculated when minimum funding value create an auction with minimum funding vale create an order for this auction with bidding amount minimum funding check minimum funding estimated tokens sold fields ar minimum funding estimated tokens sold er circles are recalculated and filled in | 1 |
622,838 | 19,657,710,827 | IssuesEvent | 2022-01-10 14:11:35 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Need to update Validation Message if user trying to edit activity after allowed time expiration | bug priority-medium Stale | **Describe the bug**
if user try to edit post without refreshing page then validation message gets displayed “There was a problem while updating your post."
**To Reproduce**
https://www.loom.com/share/7a6ffd5ee0d948b4acadd83ee71d7194
**Expected behavior**
It should be proper infomatic response like - You are not allowed to edit activity anymore. Please contact site admin for more details.
**Jira issue** : [PROD-700]
[PROD-700]: https://buddyboss.atlassian.net/browse/PROD-700?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | Need to update Validation Message if user trying to edit activity after allowed time expiration - **Describe the bug**
if user try to edit post without refreshing page then validation message gets displayed “There was a problem while updating your post."
**To Reproduce**
https://www.loom.com/share/7a6ffd5ee0d948b4acadd83ee71d7194
**Expected behavior**
It should be proper infomatic response like - You are not allowed to edit activity anymore. Please contact site admin for more details.
**Jira issue** : [PROD-700]
[PROD-700]: https://buddyboss.atlassian.net/browse/PROD-700?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | need to update validation message if user trying to edit activity after allowed time expiration describe the bug if user try to edit post without refreshing page then validation message gets displayed “there was a problem while updating your post to reproduce expected behavior it should be proper infomatic response like you are not allowed to edit activity anymore please contact site admin for more details jira issue | 1 |
816,433 | 30,599,418,759 | IssuesEvent | 2023-07-22 06:59:58 | Weiver-project/Weiver | https://api.github.com/repos/Weiver-project/Weiver | closed | FE_[Feat]: 게시글 작성 페이지 | ✨feat 🟡 Priority: Medium | ## 📃To do List
- [x] 뒤로가기 아이콘 클릭 시, 이전 페이지로 이동
## 📃글 작성
- [x] 드롭다운 박스를 통해 게시글 유형 선택(리뷰, 잡담)
- [x] 제목, 작품명, 글 내용, 첨부파일 등록(유형,제목,내용,작품명에 required 속성 부여)
- [ ] 첨부파일(이미지)를 미리보기 출력
- [x] 작성하기 클릭 시, 정보 저장 요청
- [x] 게시글 수정하기의 경우 저장된 내용 모두 조회
| 1.0 | FE_[Feat]: 게시글 작성 페이지 - ## 📃To do List
- [x] 뒤로가기 아이콘 클릭 시, 이전 페이지로 이동
## 📃글 작성
- [x] 드롭다운 박스를 통해 게시글 유형 선택(리뷰, 잡담)
- [x] 제목, 작품명, 글 내용, 첨부파일 등록(유형,제목,내용,작품명에 required 속성 부여)
- [ ] 첨부파일(이미지)를 미리보기 출력
- [x] 작성하기 클릭 시, 정보 저장 요청
- [x] 게시글 수정하기의 경우 저장된 내용 모두 조회
| priority | fe 게시글 작성 페이지 📃to do list 뒤로가기 아이콘 클릭 시 이전 페이지로 이동 📃글 작성 드롭다운 박스를 통해 게시글 유형 선택 리뷰 잡담 제목 작품명 글 내용 첨부파일 등록 유형 제목 내용 작품명에 required 속성 부여 첨부파일 이미지 를 미리보기 출력 작성하기 클릭 시 정보 저장 요청 게시글 수정하기의 경우 저장된 내용 모두 조회 | 1 |
584,577 | 17,458,747,758 | IssuesEvent | 2021-08-06 07:23:39 | ansible-collections/azure | https://api.github.com/repos/ansible-collections/azure | closed | Add support for spot instances | has_pr medium_priority new_featrue | <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add support for creating spot instances in AWS and would like to be able to do the same in Azure rather than having to spin up only full-price instances. In the CLI this is done by adding a "--priority spot" to the VM instance creation command. See: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/spot-cli
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_virtualmachine
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
For instances that are (relatively) short-lived and/or have workloads that can be easily restarted on a new instance using spot instances is a way to reduce overall cloud spend. In my group we use this for running benchmarks that can be restarted but there are many other possible use cases.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| 1.0 | Add support for spot instances - <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add support for creating spot instances in AWS and would like to be able to do the same in Azure rather than having to spin up only full-price instances. In the CLI this is done by adding a "--priority spot" to the VM instance creation command. See: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/spot-cli
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_virtualmachine
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
For instances that are (relatively) short-lived and/or have workloads that can be easily restarted on a new instance using spot instances is a way to reduce overall cloud spend. In my group we use this for running benchmarks that can be restarted but there are many other possible use cases.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| priority | add support for spot instances summary add support for creating spot instances in aws and would like to be able to do the same in azure rather than having to spin up only full price instances in the cli this is done by adding a priority spot to the vm instance creation command see issue type feature idea component name azure rm virtualmachine additional information for instances that are relatively short lived and or have workloads that can be easily restarted on a new instance using spot instances is a way to reduce overall cloud spend in my group we use this for running benchmarks that can be restarted but there are many other possible use cases yaml | 1 |
508,046 | 14,688,990,924 | IssuesEvent | 2021-01-02 06:42:35 | emre1702/TDS-V-Public | https://api.github.com/repos/emre1702/TDS-V-Public | closed | [BUG] Winner in round ranking has wrong rotation | bug medium priority | He is looking to the wrong direction (back), why?
Tested with 2 players. | 1.0 | [BUG] Winner in round ranking has wrong rotation - He is looking to the wrong direction (back), why?
Tested with 2 players. | priority | winner in round ranking has wrong rotation he is looking to the wrong direction back why tested with players | 1 |
39,653 | 2,858,049,555 | IssuesEvent | 2015-06-02 23:03:38 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | pmemobj: user is able to open pool with NULL layout | Exposure: Medium Priority: 3 medium Type: Bug | It is possible to open pool with passed NULL as layout.
Steps to reproduce:
1. pop = pmemobj_create(/mnt/psmem_0/myfile, "pmemobj_layout", PMEMOBJ_MIN_POOL, 0666)
2. pmemobj_close(pop)
3. pop = pmemobj_open(/mnt/psmem_0/myfile, NULL)
Expected result:
Pool should not be opened, NULL pointer returned
Current result:
Valid pool pointer is returned, pool is accessible | 1.0 | pmemobj: user is able to open pool with NULL layout - It is possible to open pool with passed NULL as layout.
Steps to reproduce:
1. pop = pmemobj_create(/mnt/psmem_0/myfile, "pmemobj_layout", PMEMOBJ_MIN_POOL, 0666)
2. pmemobj_close(pop)
3. pop = pmemobj_open(/mnt/psmem_0/myfile, NULL)
Expected result:
Pool should not be opened, NULL pointer returned
Current result:
Valid pool pointer is returned, pool is accessible | priority | pmemobj user is able to open pool with null layout it is possible to open pool with passed null as layout steps to reproduce pop pmemobj create mnt psmem myfile pmemobj layout pmemobj min pool pmemobj close pop pop pmemobj open mnt psmem myfile null expected result pool should not be opened null pointer returned current result valid pool pointer is returned pool is accessible | 1 |
319,501 | 9,745,178,895 | IssuesEvent | 2019-06-03 09:00:36 | poanetwork/blockscout | https://api.github.com/repos/poanetwork/blockscout | opened | Failed to decode Ethereum JSONRPC response in debug_traceTransaction | bug :bug: chain: Go :chains: priority: medium | ```
2019-06-03T08:53:25.460 fetcher=internal_transaction count=10 [error] Task #PID<0.20387.48> started from Indexer.Fetcher.InternalTransaction terminating
** (EthereumJSONRPC.DecodeError) Failed to decode Ethereum JSONRPC response:
request:
url: http://XXX
body: [{"id":0,"jsonrpc":"2.0","method":"debug_traceTransaction","params":["0x5c2e1b83848a04805cf0f7ab653ea0b9c605fe9339625e6b97be940e111e4897",{"tracer":"// tracer allows Geth's `debug_traceTransaction` to mimic the output of Parity's `trace_replayTransaction`\n{\n // The call stack of the EVM execution.\n callStack: [{}],\n\n // step is invoked for every opcode that the VM executes.\n step(log, db) {\n // Capture any errors immediately\n const error = log.getError();\n\n if (error !== undefined) {\n this.fault(log, db);\n } else {\n this.success(log, db);\n }\n },\n\n // fault is invoked when the actual execution of an opcode fails.\n fault(log, db) {\n // If the topmost call already reverted, don't handle the additional fault again\n if (this.topCall().error === undefined) {\n this.putError(log);\n }\n },\n\n putError(log) {\n if (this.callStack.length > 1) {\n this.putErrorInTopCall(log);\n } else {\n this.putErrorInBottomCall(log);\n }\n },\n\n putErrorInTopCall(log) {\n // Pop off the just failed call\n const call = this.callStack.pop();\n this.putErrorInCall(log, call);\n this.pushChildCall(call);\n },\n\n putErrorInBottomCall(log) {\n const call = this.bottomCall();\n this.putErrorInCall(log, call);\n },\n\n putErrorInCall(log, call) {\n call.error = log.getError();\n\n // Consume all available gas and clean any leftovers\n if (call.gasBigInt !== undefined) {\n call.gasUsedBigInt = call.gasBigInt;\n }\n\n delete call.outputOffset;\n delete call.outputLength;\n },\n\n topCall() {\n return this.callStack[this.callStack.length - 1];\n },\n\n bottomCall() {\n return this.callStack[0];\n },\n\n pushChildCall(childCall) {\n const topCall = this.topCall();\n\n if (topCall.calls === undefined) {\n topCall.calls = [];\n }\n\n topCall.calls.push(childCall);\n },\n\n pushGasToTopCall(log) {\n const topCall = this.topCall();\n\n if (topCall.gasBigInt === undefined) {\n topCall.gasBigInt = log.getGas();\n }\n topCall.gasUsedBigInt = topCall.gasBigInt - log.getGas() - log.getCost();\n },\n\n success(log, db) {\n const op = log.op.toString();\n\n this.beforeOp(log, db);\n\n switch (op) {\n case 'CREATE':\n this.createOp(log);\n break;\n case 'SELFDESTRUCT':\n this.selfDestructOp(log, db);\n break;\n case 'CALL':\n case 'CALLCODE':\n case 'DELEGATECALL':\n case 'STATICCALL':\n this.callOp(log, op);\n break;\n case 'REVERT':\n this.revertOp();\n break;\n }\n },\n\n beforeOp(log, db) {\n /**\n * Depths\n * 0 - `ctx`. Never shows up in `log.getDepth()`\n * 1 - first level of `log.getDepth()`\n *\n * callStack indexes\n *\n * 0 - pseudo-call stand-in for `ctx` in initializer (`callStack: [{}]`)\n * 1 - first callOp inside of `ctx`\n */\n const logDepth = log.getDepth();\n const callStackDepth = this.callStack.length;\n\n if (logDepth < callStackDepth) {\n // Pop off the last call and get the execution results\n const call = this.callStack.pop();\n\n const ret = log.stack.peek(0);\n\n if (!ret.equals(0)) {\n if (call.type === 'create') {\n call.createdContractAddressHash = toHex(toAddress(ret.toString(16)));\n call.createdContractCode = toHex(db.getCode(toAddress(ret.toString(16))));\n } else {\n call.output = toHex(log.memory.slice(call.outputOffset, call.outputOffset + call.outputLength));\n }\n } else if (call.error === undefined) {\n call.error = 'internal failure';\n }\n\n delete call.outputOffset;\n delete call.outputLength;\n\n this.pushChildCall(call);\n }\n else {\n this.pushGasToTopCall(log);\n }\n },\n\n createOp(log) {\n const inputOffset = log.stack.peek(1).valueOf();\n const inputLength = log.stack.peek(2).valueOf();\n const inputEnd = inputOffset + inputLength;\n const stackValue = log.stack.peek(0);\n\n const call = {\n type: 'create',\n from: toHex(log.contract.getAddress()),\n init: toHex(log.memory.slice(inputOffset, inputEnd)),\n valueBigInt: bigInt(stackValue.toString(10))\n };\n this.callStack.push(call);\n },\n\n selfDestructOp(log, db) {\n const contractAddress = log.contract.getAddress();\n\n this.pushChildCall({\n type: 'selfdestruct',\n from: toHex(contractAddress),\n to: toHex(toAddress(log.stack.peek(0).toString(16))),\n gasBigInt: log.getGas(),\n gasUsedBigInt: log.getCost(),\n valueBigInt: db.getBalance(contractAddress)\n });\n },\n\n callOp(log, op) {\n const to = toAddress(log.stack.peek(1).toString(16));\n\n // Skip any pre-compile invocations, those are just fancy opcodes\n if (!isPrecompiled(to)) {\n this.callCustomOp(log, op, to);\n }\n },\n\n callCustomOp(log, op, to) {\n const stackOffset = (op === 'DELEGATECALL' || op === 'STATICCALL' ? 0 : 1);\n\n const inputOffset = log.stack.peek(2 + stackOffset).valueOf();\n const inputLength = log.stack.peek(3 + stackOffset).valueOf();\n const inputEnd = inputOffset + inputLength;\n\n const call = {\n type: 'call',\n callType: op.toLowerCase(),\n from: toHex(log.contract.getAddress()),\n to: toHex(to),\n input: toHex(log.memory.slice(inputOffset, inputEnd)),\n outputOffset: log.stack.peek(4 + stackOffset).valueOf(),\n outputLength: log.stack.peek(5 + stackOffset).valueOf()\n };\n\n switch (op) {\n case 'CALL':\n case 'CALLCODE':\n call.valueBigInt = bigInt(log.stack.peek(2));\n break;\n case 'DELEGATECALL':\n // value inherited from scope during call sequencing\n break;\n case 'STATICCALL':\n // by definition static calls transfer no value\n call.valueBigInt = bigInt.zero;\n break;\n default:\n throw \"Unknown custom call op \" + op;\n }\n\n this.callStack.push(call);\n },\n\n revertOp() {\n this.topCall().error = 'execution reverted';\n },\n\n // result is invoked when all the opcodes have been iterated over and returns\n // the final result of the tracing.\n result(ctx, db) {\n const result = this.ctxToResult(ctx, db);\n const filtered = this.filterNotUndefined(result);\n const callSequence = this.sequence(filtered, [], filtered.valueBigInt, []).callSequence;\n return this.encodeCallSequence(callSequence);\n },\n\n ctxToResult(ctx, db) {\n var result;\n\n switch (ctx.type) {\n case 'CALL':\n result = this.ctxToCall(ctx);\n break;\n case 'CREATE':\n result = this.ctxToCreate(ctx, db);\n break;\n }\n\n return result;\n },\n\n ctxToCall(ctx) {\n const result = {\n type: 'call',\n (truncated)
2019-06-03T08:53:29.948 application=indexer fetcher=internal_transaction count=10 error_count=10 [error] failed to fetch internal transactions for transactions: [%{data: %{block_number: 274672, transaction_hash: "0x67e18f6d65f8b4f2f0b051223c8f2653cac9481f71c19877745dc167d2c7a828", transaction_index: 0}, message: :timeout}, %{data: %{block_number: 667788, transaction_hash: "0xc17e39fae5ff832a67c58d6ff03d68124e3dddc092b33ef7b1d453fe9c5ff998", transaction_index: 0}, message: :timeout}]
2019-06-03T08:53:32.864 fetcher=internal_transaction count=10 [error] Task #PID<0.20408.48> started from Indexer.Fetcher.InternalTransaction terminating
** (FunctionClauseError) no function clause matching in EthereumJSONRPC.Geth.Call.elixir_to_internal_transaction_params/1
(ethereum_jsonrpc) lib/ethereum_jsonrpc/geth/call.ex:320: EthereumJSONRPC.Geth.Call.elixir_to_internal_transaction_params(%{"blockNumber" => 584340, "callType" => "staticcall", "error" => "execution reverted", "from" => "0x3858636f27e269d23db2ef1fcca5f93dcaa564cd", "gas" => 6468087, "gasUsed" => 5300, "index" => 6, "input" => "0x09d10a5e00000000000000000000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000000000000000000002", "to" => "0x79073fc2117dd054fcedacad1e7018c9cbe3ec0b", "traceAddress" => [1, 3], "transactionHash" => "0xbc38745b826f058ed2f6c93fa5b145323857f06bbb5230b6a6a50e09e0915857", "transactionIndex" => 0, "type" => "call", "value" => 0})
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(ethereum_jsonrpc) lib/ethereum_jsonrpc/geth.ex:168: EthereumJSONRPC.Geth.debug_trace_transaction_response_to_internal_transactions_params/2
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(ethereum_jsonrpc) lib/ethereum_jsonrpc/geth.ex:110: EthereumJSONRPC.Geth.debug_trace_transaction_responses_to_internal_transactions_params/3
(indexer) lib/indexer/fetcher/internal_transaction.ex:175: Indexer.Fetcher.InternalTransaction.run/2
Function: &Indexer.BufferedTask.log_run/1
Args: [%{batch: [{703508, <<172, 87, 70, 4, 166, 175, 182, 186, 170, 150, 159, 243, 10, 82, 176, 46, 205, 59, 88, 9, 244, 83, 27, 114, 241, 193, 248, 153, 197, 249, 191, 62>>, 0}, {703641, <<1, 77, 180, 230, 116, 153, 95, 219, 150, 65, 32, 125, 148, 133, 254, 91, 114, 120, 16, 123, 173, 113, 169, 71, 14, 174, 95, 183, 33, 201, 186, 249>>, 0}, {703610, <<209, 246, 209, 7, 24, 89, 221, 173, 59, 21, 155, 182, 254, 127, 24, 138, 84, 231, 190, 253, 76, 136, 207, 210, 220, 203, 251, 210, 101, 88, 46, 176>>, 0}, {703626, <<13, 65, 252, 103, 20, 46, 211, 1, 196, 185, 40, 30, 19, 189, 133, 175, 124, 241, 126, 109, 86, 128, 96, 239, 149, 47, 1, 21, 151, 183, 33, 227>>, 0}, {703339, <<112, 28, 128, 108, 228, 32, 164, 131, 68, 187, 212, 108, 31, 73, 96, 6, 255, 249, 200, 58, 239, 14, 91, 161, 224, 225, 119, 169, 168, 97, 99, 157>>, 1}, {703226, <<24, 232, 44, 163, 74, 8, 217, 245, 90, 156, 80, 221, 0, 152, 30, 180, 253, 165, 212, 91, 231, 150, 217, 185, 245, 143, 169, 73, 14, 108, 61, 136>>, 2}, {703624, <<171, 27, 196, 104, 235, 176, 43, 21, 139, 12, 208, 108, 41, 209, 215, 1, 69, 199, 141, 105, 189, 120, 167, 124, 89, 165, 128, 140, 88, 68, 44, 41>>, 0}, {584340, <<188, 56, 116, 91, 130, 111, 5, 142, 210, 246, 201, 63, 165, 177, 69, 50, 56, 87, 240, 107, 187, 82, 48, 182, 166, 165, 14, 9, 224, 145, 88, 87>>, 0}, {703260, <<174, 104, 96, 254, 193, 106, 194, 196, 104, 52, 42, 102, 224, 205, 75, 74, 19, 143, 33, 4, 217, 187, 77, 139, 83, 196, 91, 227, 135, 25, 156, 4>>, 0}, {703252, <<8, 216, 207, 41, 139, 177, 209, 170, 59, 202, 7, 33, 6, 73, 164, 30, 61, 197, 202, 65, 125, 230, 247, 4, 127, 223, 205, 79, 203, 250, 79, 180>>, 0}], callback_module: Indexer.Fetcher.InternalTransaction, callback_module_state: [transport: EthereumJSONRPC.HTTP, transport_options: [http: EthereumJSONRPC.HTTP.HTTPoison, url: "http://XXX", http_options: [recv_timeout: 600000, timeout: 600000, hackney: [pool: :ethereum_jsonrpc]]], variant: EthereumJSONRPC.Geth], metadata: [fetcher: :internal_transaction]}]
2019-06-03T08:53:37.104 application=indexer fetcher=internal_transaction count=10 error_count=10 [error] failed to fetch internal transactions for transactions: [%{data: %{block_number: 188681, transaction_hash: "0xdf9c1ee1491daddec829010934495388abbcdb46d7d13d5a2e67800366dc251a", transaction_index: 0}, message: :timeout}, %{data: %{block_number: 280711, transaction_hash: "0xa2617863c4c0a99e45fc9adad82b4d6888ed34d1acfe0790998b90ffc8fdf2ae", transaction_index: 0}, message: :timeout}, %{data: %{block_number: 247179, transaction_hash: "0xea02485b56042f3bf1dfe0fb754fad0c65cf7a11c1a24a52ade418488cd655d5", transaction_index: 0}, message: :timeout}]
``` | 1.0 | Failed to decode Ethereum JSONRPC response in debug_traceTransaction - ```
2019-06-03T08:53:25.460 fetcher=internal_transaction count=10 [error] Task #PID<0.20387.48> started from Indexer.Fetcher.InternalTransaction terminating
** (EthereumJSONRPC.DecodeError) Failed to decode Ethereum JSONRPC response:
request:
url: http://XXX
body: [{"id":0,"jsonrpc":"2.0","method":"debug_traceTransaction","params":["0x5c2e1b83848a04805cf0f7ab653ea0b9c605fe9339625e6b97be940e111e4897",{"tracer":"// tracer allows Geth's `debug_traceTransaction` to mimic the output of Parity's `trace_replayTransaction`\n{\n // The call stack of the EVM execution.\n callStack: [{}],\n\n // step is invoked for every opcode that the VM executes.\n step(log, db) {\n // Capture any errors immediately\n const error = log.getError();\n\n if (error !== undefined) {\n this.fault(log, db);\n } else {\n this.success(log, db);\n }\n },\n\n // fault is invoked when the actual execution of an opcode fails.\n fault(log, db) {\n // If the topmost call already reverted, don't handle the additional fault again\n if (this.topCall().error === undefined) {\n this.putError(log);\n }\n },\n\n putError(log) {\n if (this.callStack.length > 1) {\n this.putErrorInTopCall(log);\n } else {\n this.putErrorInBottomCall(log);\n }\n },\n\n putErrorInTopCall(log) {\n // Pop off the just failed call\n const call = this.callStack.pop();\n this.putErrorInCall(log, call);\n this.pushChildCall(call);\n },\n\n putErrorInBottomCall(log) {\n const call = this.bottomCall();\n this.putErrorInCall(log, call);\n },\n\n putErrorInCall(log, call) {\n call.error = log.getError();\n\n // Consume all available gas and clean any leftovers\n if (call.gasBigInt !== undefined) {\n call.gasUsedBigInt = call.gasBigInt;\n }\n\n delete call.outputOffset;\n delete call.outputLength;\n },\n\n topCall() {\n return this.callStack[this.callStack.length - 1];\n },\n\n bottomCall() {\n return this.callStack[0];\n },\n\n pushChildCall(childCall) {\n const topCall = this.topCall();\n\n if (topCall.calls === undefined) {\n topCall.calls = [];\n }\n\n topCall.calls.push(childCall);\n },\n\n pushGasToTopCall(log) {\n const topCall = this.topCall();\n\n if (topCall.gasBigInt === undefined) {\n topCall.gasBigInt = log.getGas();\n }\n topCall.gasUsedBigInt = topCall.gasBigInt - log.getGas() - log.getCost();\n },\n\n success(log, db) {\n const op = log.op.toString();\n\n this.beforeOp(log, db);\n\n switch (op) {\n case 'CREATE':\n this.createOp(log);\n break;\n case 'SELFDESTRUCT':\n this.selfDestructOp(log, db);\n break;\n case 'CALL':\n case 'CALLCODE':\n case 'DELEGATECALL':\n case 'STATICCALL':\n this.callOp(log, op);\n break;\n case 'REVERT':\n this.revertOp();\n break;\n }\n },\n\n beforeOp(log, db) {\n /**\n * Depths\n * 0 - `ctx`. Never shows up in `log.getDepth()`\n * 1 - first level of `log.getDepth()`\n *\n * callStack indexes\n *\n * 0 - pseudo-call stand-in for `ctx` in initializer (`callStack: [{}]`)\n * 1 - first callOp inside of `ctx`\n */\n const logDepth = log.getDepth();\n const callStackDepth = this.callStack.length;\n\n if (logDepth < callStackDepth) {\n // Pop off the last call and get the execution results\n const call = this.callStack.pop();\n\n const ret = log.stack.peek(0);\n\n if (!ret.equals(0)) {\n if (call.type === 'create') {\n call.createdContractAddressHash = toHex(toAddress(ret.toString(16)));\n call.createdContractCode = toHex(db.getCode(toAddress(ret.toString(16))));\n } else {\n call.output = toHex(log.memory.slice(call.outputOffset, call.outputOffset + call.outputLength));\n }\n } else if (call.error === undefined) {\n call.error = 'internal failure';\n }\n\n delete call.outputOffset;\n delete call.outputLength;\n\n this.pushChildCall(call);\n }\n else {\n this.pushGasToTopCall(log);\n }\n },\n\n createOp(log) {\n const inputOffset = log.stack.peek(1).valueOf();\n const inputLength = log.stack.peek(2).valueOf();\n const inputEnd = inputOffset + inputLength;\n const stackValue = log.stack.peek(0);\n\n const call = {\n type: 'create',\n from: toHex(log.contract.getAddress()),\n init: toHex(log.memory.slice(inputOffset, inputEnd)),\n valueBigInt: bigInt(stackValue.toString(10))\n };\n this.callStack.push(call);\n },\n\n selfDestructOp(log, db) {\n const contractAddress = log.contract.getAddress();\n\n this.pushChildCall({\n type: 'selfdestruct',\n from: toHex(contractAddress),\n to: toHex(toAddress(log.stack.peek(0).toString(16))),\n gasBigInt: log.getGas(),\n gasUsedBigInt: log.getCost(),\n valueBigInt: db.getBalance(contractAddress)\n });\n },\n\n callOp(log, op) {\n const to = toAddress(log.stack.peek(1).toString(16));\n\n // Skip any pre-compile invocations, those are just fancy opcodes\n if (!isPrecompiled(to)) {\n this.callCustomOp(log, op, to);\n }\n },\n\n callCustomOp(log, op, to) {\n const stackOffset = (op === 'DELEGATECALL' || op === 'STATICCALL' ? 0 : 1);\n\n const inputOffset = log.stack.peek(2 + stackOffset).valueOf();\n const inputLength = log.stack.peek(3 + stackOffset).valueOf();\n const inputEnd = inputOffset + inputLength;\n\n const call = {\n type: 'call',\n callType: op.toLowerCase(),\n from: toHex(log.contract.getAddress()),\n to: toHex(to),\n input: toHex(log.memory.slice(inputOffset, inputEnd)),\n outputOffset: log.stack.peek(4 + stackOffset).valueOf(),\n outputLength: log.stack.peek(5 + stackOffset).valueOf()\n };\n\n switch (op) {\n case 'CALL':\n case 'CALLCODE':\n call.valueBigInt = bigInt(log.stack.peek(2));\n break;\n case 'DELEGATECALL':\n // value inherited from scope during call sequencing\n break;\n case 'STATICCALL':\n // by definition static calls transfer no value\n call.valueBigInt = bigInt.zero;\n break;\n default:\n throw \"Unknown custom call op \" + op;\n }\n\n this.callStack.push(call);\n },\n\n revertOp() {\n this.topCall().error = 'execution reverted';\n },\n\n // result is invoked when all the opcodes have been iterated over and returns\n // the final result of the tracing.\n result(ctx, db) {\n const result = this.ctxToResult(ctx, db);\n const filtered = this.filterNotUndefined(result);\n const callSequence = this.sequence(filtered, [], filtered.valueBigInt, []).callSequence;\n return this.encodeCallSequence(callSequence);\n },\n\n ctxToResult(ctx, db) {\n var result;\n\n switch (ctx.type) {\n case 'CALL':\n result = this.ctxToCall(ctx);\n break;\n case 'CREATE':\n result = this.ctxToCreate(ctx, db);\n break;\n }\n\n return result;\n },\n\n ctxToCall(ctx) {\n const result = {\n type: 'call',\n (truncated)
2019-06-03T08:53:29.948 application=indexer fetcher=internal_transaction count=10 error_count=10 [error] failed to fetch internal transactions for transactions: [%{data: %{block_number: 274672, transaction_hash: "0x67e18f6d65f8b4f2f0b051223c8f2653cac9481f71c19877745dc167d2c7a828", transaction_index: 0}, message: :timeout}, %{data: %{block_number: 667788, transaction_hash: "0xc17e39fae5ff832a67c58d6ff03d68124e3dddc092b33ef7b1d453fe9c5ff998", transaction_index: 0}, message: :timeout}]
2019-06-03T08:53:32.864 fetcher=internal_transaction count=10 [error] Task #PID<0.20408.48> started from Indexer.Fetcher.InternalTransaction terminating
** (FunctionClauseError) no function clause matching in EthereumJSONRPC.Geth.Call.elixir_to_internal_transaction_params/1
(ethereum_jsonrpc) lib/ethereum_jsonrpc/geth/call.ex:320: EthereumJSONRPC.Geth.Call.elixir_to_internal_transaction_params(%{"blockNumber" => 584340, "callType" => "staticcall", "error" => "execution reverted", "from" => "0x3858636f27e269d23db2ef1fcca5f93dcaa564cd", "gas" => 6468087, "gasUsed" => 5300, "index" => 6, "input" => "0x09d10a5e00000000000000000000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000000000000000000002", "to" => "0x79073fc2117dd054fcedacad1e7018c9cbe3ec0b", "traceAddress" => [1, 3], "transactionHash" => "0xbc38745b826f058ed2f6c93fa5b145323857f06bbb5230b6a6a50e09e0915857", "transactionIndex" => 0, "type" => "call", "value" => 0})
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(ethereum_jsonrpc) lib/ethereum_jsonrpc/geth.ex:168: EthereumJSONRPC.Geth.debug_trace_transaction_response_to_internal_transactions_params/2
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
(ethereum_jsonrpc) lib/ethereum_jsonrpc/geth.ex:110: EthereumJSONRPC.Geth.debug_trace_transaction_responses_to_internal_transactions_params/3
(indexer) lib/indexer/fetcher/internal_transaction.ex:175: Indexer.Fetcher.InternalTransaction.run/2
Function: &Indexer.BufferedTask.log_run/1
Args: [%{batch: [{703508, <<172, 87, 70, 4, 166, 175, 182, 186, 170, 150, 159, 243, 10, 82, 176, 46, 205, 59, 88, 9, 244, 83, 27, 114, 241, 193, 248, 153, 197, 249, 191, 62>>, 0}, {703641, <<1, 77, 180, 230, 116, 153, 95, 219, 150, 65, 32, 125, 148, 133, 254, 91, 114, 120, 16, 123, 173, 113, 169, 71, 14, 174, 95, 183, 33, 201, 186, 249>>, 0}, {703610, <<209, 246, 209, 7, 24, 89, 221, 173, 59, 21, 155, 182, 254, 127, 24, 138, 84, 231, 190, 253, 76, 136, 207, 210, 220, 203, 251, 210, 101, 88, 46, 176>>, 0}, {703626, <<13, 65, 252, 103, 20, 46, 211, 1, 196, 185, 40, 30, 19, 189, 133, 175, 124, 241, 126, 109, 86, 128, 96, 239, 149, 47, 1, 21, 151, 183, 33, 227>>, 0}, {703339, <<112, 28, 128, 108, 228, 32, 164, 131, 68, 187, 212, 108, 31, 73, 96, 6, 255, 249, 200, 58, 239, 14, 91, 161, 224, 225, 119, 169, 168, 97, 99, 157>>, 1}, {703226, <<24, 232, 44, 163, 74, 8, 217, 245, 90, 156, 80, 221, 0, 152, 30, 180, 253, 165, 212, 91, 231, 150, 217, 185, 245, 143, 169, 73, 14, 108, 61, 136>>, 2}, {703624, <<171, 27, 196, 104, 235, 176, 43, 21, 139, 12, 208, 108, 41, 209, 215, 1, 69, 199, 141, 105, 189, 120, 167, 124, 89, 165, 128, 140, 88, 68, 44, 41>>, 0}, {584340, <<188, 56, 116, 91, 130, 111, 5, 142, 210, 246, 201, 63, 165, 177, 69, 50, 56, 87, 240, 107, 187, 82, 48, 182, 166, 165, 14, 9, 224, 145, 88, 87>>, 0}, {703260, <<174, 104, 96, 254, 193, 106, 194, 196, 104, 52, 42, 102, 224, 205, 75, 74, 19, 143, 33, 4, 217, 187, 77, 139, 83, 196, 91, 227, 135, 25, 156, 4>>, 0}, {703252, <<8, 216, 207, 41, 139, 177, 209, 170, 59, 202, 7, 33, 6, 73, 164, 30, 61, 197, 202, 65, 125, 230, 247, 4, 127, 223, 205, 79, 203, 250, 79, 180>>, 0}], callback_module: Indexer.Fetcher.InternalTransaction, callback_module_state: [transport: EthereumJSONRPC.HTTP, transport_options: [http: EthereumJSONRPC.HTTP.HTTPoison, url: "http://XXX", http_options: [recv_timeout: 600000, timeout: 600000, hackney: [pool: :ethereum_jsonrpc]]], variant: EthereumJSONRPC.Geth], metadata: [fetcher: :internal_transaction]}]
2019-06-03T08:53:37.104 application=indexer fetcher=internal_transaction count=10 error_count=10 [error] failed to fetch internal transactions for transactions: [%{data: %{block_number: 188681, transaction_hash: "0xdf9c1ee1491daddec829010934495388abbcdb46d7d13d5a2e67800366dc251a", transaction_index: 0}, message: :timeout}, %{data: %{block_number: 280711, transaction_hash: "0xa2617863c4c0a99e45fc9adad82b4d6888ed34d1acfe0790998b90ffc8fdf2ae", transaction_index: 0}, message: :timeout}, %{data: %{block_number: 247179, transaction_hash: "0xea02485b56042f3bf1dfe0fb754fad0c65cf7a11c1a24a52ade418488cd655d5", transaction_index: 0}, message: :timeout}]
``` | priority | failed to decode ethereum jsonrpc response in debug tracetransaction fetcher internal transaction count task pid started from indexer fetcher internaltransaction terminating ethereumjsonrpc decodeerror failed to decode ethereum jsonrpc response request url body n n step is invoked for every opcode that the vm executes n step log db n capture any errors immediately n const error log geterror n n if error undefined n this fault log db n else n this success log db n n n n fault is invoked when the actual execution of an opcode fails n fault log db n if the topmost call already reverted don t handle the additional fault again n if this topcall error undefined n this puterror log n n n n puterror log n if this callstack length n this puterrorintopcall log n else n this puterrorinbottomcall log n n n n puterrorintopcall log n pop off the just failed call n const call this callstack pop n this puterrorincall log call n this pushchildcall call n n n puterrorinbottomcall log n const call this bottomcall n this puterrorincall log call n n n puterrorincall log call n call error log geterror n n consume all available gas and clean any leftovers n if call gasbigint undefined n call gasusedbigint call gasbigint n n n delete call outputoffset n delete call outputlength n n n topcall n return this callstack n n n bottomcall n return this callstack n n n pushchildcall childcall n const topcall this topcall n n if topcall calls undefined n topcall calls n n n topcall calls push childcall n n n pushgastotopcall log n const topcall this topcall n n if topcall gasbigint undefined n topcall gasbigint log getgas n n topcall gasusedbigint topcall gasbigint log getgas log getcost n n n success log db n const op log op tostring n n this beforeop log db n n switch op n case create n this createop log n break n case selfdestruct n this selfdestructop log db n break n case call n case callcode n case delegatecall n case staticcall n this callop log op n break n case revert n this revertop n break n n n n beforeop log db n n depths n ctx never shows up in log getdepth n first level of log getdepth n n callstack indexes n n pseudo call stand in for ctx in initializer callstack n first callop inside of ctx n n const logdepth log getdepth n const callstackdepth this callstack length n n if logdepth callstackdepth n pop off the last call and get the execution results n const call this callstack pop n n const ret log stack peek n n if ret equals n if call type create n call createdcontractaddresshash tohex toaddress ret tostring n call createdcontractcode tohex db getcode toaddress ret tostring n else n call output tohex log memory slice call outputoffset call outputoffset call outputlength n n else if call error undefined n call error internal failure n n n delete call outputoffset n delete call outputlength n n this pushchildcall call n n else n this pushgastotopcall log n n n n createop log n const inputoffset log stack peek valueof n const inputlength log stack peek valueof n const inputend inputoffset inputlength n const stackvalue log stack peek n n const call n type create n from tohex log contract getaddress n init tohex log memory slice inputoffset inputend n valuebigint bigint stackvalue tostring n n this callstack push call n n n selfdestructop log db n const contractaddress log contract getaddress n n this pushchildcall n type selfdestruct n from tohex contractaddress n to tohex toaddress log stack peek tostring n gasbigint log getgas n gasusedbigint log getcost n valuebigint db getbalance contractaddress n n n n callop log op n const to toaddress log stack peek tostring n n skip any pre compile invocations those are just fancy opcodes n if isprecompiled to n this callcustomop log op to n n n n callcustomop log op to n const stackoffset op delegatecall op staticcall n n const inputoffset log stack peek stackoffset valueof n const inputlength log stack peek stackoffset valueof n const inputend inputoffset inputlength n n const call n type call n calltype op tolowercase n from tohex log contract getaddress n to tohex to n input tohex log memory slice inputoffset inputend n outputoffset log stack peek stackoffset valueof n outputlength log stack peek stackoffset valueof n n n switch op n case call n case callcode n call valuebigint bigint log stack peek n break n case delegatecall n value inherited from scope during call sequencing n break n case staticcall n by definition static calls transfer no value n call valuebigint bigint zero n break n default n throw unknown custom call op op n n n this callstack push call n n n revertop n this topcall error execution reverted n n n result is invoked when all the opcodes have been iterated over and returns n the final result of the tracing n result ctx db n const result this ctxtoresult ctx db n const filtered this filternotundefined result n const callsequence this sequence filtered filtered valuebigint callsequence n return this encodecallsequence callsequence n n n ctxtoresult ctx db n var result n n switch ctx type n case call n result this ctxtocall ctx n break n case create n result this ctxtocreate ctx db n break n n n return result n n n ctxtocall ctx n const result n type call n truncated application indexer fetcher internal transaction count error count failed to fetch internal transactions for transactions fetcher internal transaction count task pid started from indexer fetcher internaltransaction terminating functionclauseerror no function clause matching in ethereumjsonrpc geth call elixir to internal transaction params ethereum jsonrpc lib ethereum jsonrpc geth call ex ethereumjsonrpc geth call elixir to internal transaction params blocknumber calltype staticcall error execution reverted from gas gasused index input to traceaddress transactionhash transactionindex type call value elixir lib enum ex enum map lists map elixir lib enum ex enum map lists map ethereum jsonrpc lib ethereum jsonrpc geth ex ethereumjsonrpc geth debug trace transaction response to internal transactions params elixir lib enum ex enum map lists map elixir lib enum ex enum map lists map ethereum jsonrpc lib ethereum jsonrpc geth ex ethereumjsonrpc geth debug trace transaction responses to internal transactions params indexer lib indexer fetcher internal transaction ex indexer fetcher internaltransaction run function indexer bufferedtask log run args callback module indexer fetcher internaltransaction callback module state variant ethereumjsonrpc geth metadata application indexer fetcher internal transaction count error count failed to fetch internal transactions for transactions | 1 |
592,835 | 17,931,995,242 | IssuesEvent | 2021-09-10 10:29:47 | fangohr/nmag | https://api.github.com/repos/fangohr/nmag | closed | Put static html from Wiki into new github pages location | medium priority | The scraped and convert html is here: https://github.com/mhanberry1/nmag-www-archive/tree/master/nmag.soton.ac.uk
- This needs to be moved to http://nmag-project.github.io to be browsable
- Need to update link to Wiki in tabs on the left to point to the new location
@venkat004 - can you help? | 1.0 | Put static html from Wiki into new github pages location - The scraped and convert html is here: https://github.com/mhanberry1/nmag-www-archive/tree/master/nmag.soton.ac.uk
- This needs to be moved to http://nmag-project.github.io to be browsable
- Need to update link to Wiki in tabs on the left to point to the new location
@venkat004 - can you help? | priority | put static html from wiki into new github pages location the scraped and convert html is here this needs to be moved to to be browsable need to update link to wiki in tabs on the left to point to the new location can you help | 1 |
734,741 | 25,360,872,081 | IssuesEvent | 2022-11-20 21:44:11 | bounswe/bounswe2022group6 | https://api.github.com/repos/bounswe/bounswe2022group6 | closed | Implementing locmgr for location services | Priority: Medium State: In Progress Type: Development Backend | An API for retrieving location information should be implemented for requirements [1.1.1.2.5](https://github.com/bounswe/bounswe2022group6/wiki/Requirements#1112-adding-information-to-an-account), [1.1.1.3.7](https://github.com/bounswe/bounswe2022group6/wiki/Requirements#1113-editing-the-information-in-an-account), and [1.1.1.4.7](https://github.com/bounswe/bounswe2022group6/wiki/Requirements#1114-removing-information-from-an-account).
Location information precision should be in this manner: Country <b>/</b> State(City) <b>/</b> City(District) | 1.0 | Implementing locmgr for location services - An API for retrieving location information should be implemented for requirements [1.1.1.2.5](https://github.com/bounswe/bounswe2022group6/wiki/Requirements#1112-adding-information-to-an-account), [1.1.1.3.7](https://github.com/bounswe/bounswe2022group6/wiki/Requirements#1113-editing-the-information-in-an-account), and [1.1.1.4.7](https://github.com/bounswe/bounswe2022group6/wiki/Requirements#1114-removing-information-from-an-account).
Location information precision should be in this manner: Country <b>/</b> State(City) <b>/</b> City(District) | priority | implementing locmgr for location services an api for retrieving location information should be implemented for requirements and location information precision should be in this manner country state city city district | 1 |
56,217 | 3,078,574,086 | IssuesEvent | 2015-08-21 11:12:54 | MinetestForFun/minetest-minetestforfun-server | https://api.github.com/repos/MinetestForFun/minetest-minetestforfun-server | closed | Nouvelle idée de mod pour la peche | Modding Priority: Medium | Pour avoir plus de chance en pêche il nous faudrait un stuff de pêcheur
genre casque du pêcheur + bottes du pêcheur = rapidité de pêche augmentée de 50% par exemple | 1.0 | Nouvelle idée de mod pour la peche - Pour avoir plus de chance en pêche il nous faudrait un stuff de pêcheur
genre casque du pêcheur + bottes du pêcheur = rapidité de pêche augmentée de 50% par exemple | priority | nouvelle idée de mod pour la peche pour avoir plus de chance en pêche il nous faudrait un stuff de pêcheur genre casque du pêcheur bottes du pêcheur rapidité de pêche augmentée de par exemple | 1 |
674,243 | 23,044,084,117 | IssuesEvent | 2022-07-23 16:11:28 | capawesome-team/capacitor-firebase | https://api.github.com/repos/capawesome-team/capacitor-firebase | closed | feat(authentication): expose `AdditionalUserInfo` | feature priority: medium package: authentication | **Is your feature request related to an issue? Please describe:**
<!-- A clear and concise explanation of what the problem is. Former. [...] -->
Yeah. In social media entries (Facebook, Google, GooglePlay, Twitter, Apple etc.), the user's id value connected to the provider does not appear. (GoogleId value does not come for login with Google.)
**Describe your desired solution:**
<!-- A clear and concise description of what you want it to be. -->
When the user logs in, the user's id value can be added according to the response that the provider returns. (I left a small demo below)
**Describe the alternatives you are considering:**
<!-- A clear and concise description of any alternative solutions or features you are considering. -->
I mentioned it in the additional context section.
**Additional context:**
<!-- Add any other context or screenshots related to the feature request here. -->
I have attached the codes to be edited. This is for android only. It works successfully when I tested it for Facebook, Google and Twitter.
Thanks for your time!
1. FirebaseAuthenticationHelper.java
// updated one function
```java
// update params
public static JSObject createSignInResult(FirebaseUser user, AuthCredential credential, String idToken, String id) {
JSObject userResult = FirebaseAuthenticationHelper.createUserResult(user);
JSObject credentialResult = FirebaseAuthenticationHelper.createCredentialResult(credential, idToken); // update call params
JSObject result = new JSObject();
// add next line
userResult.put("id", id); // add this line
result.put("user", userResult);
result.put("credential", credentialResult);
return result;
}
```
2. FirebaseAuthentication.java
// updated two functions
```java
public void signInWithCustomToken(PluginCall call) {
boolean skipNativeAuth = this.config.getSkipNativeAuth();
if (skipNativeAuth) {
call.reject(ERROR_CUSTOM_TOKEN_SKIP_NATIVE_AUTH);
return;
}
String token = call.getString("token", "");
firebaseAuthInstance
.signInWithCustomToken(token)
.addOnCompleteListener(
plugin.getActivity(),
new OnCompleteListener<AuthResult>() {
@Override
public void onComplete(@NonNull Task<AuthResult> task) {
if (task.isSuccessful()) {
Log.d(FirebaseAuthenticationPlugin.TAG, "signInWithCustomToken succeeded.");
FirebaseUser user = getCurrentUser();
// updated next line
JSObject signInResult = FirebaseAuthenticationHelper.createSignInResult(user, null, null, null); // update call params
call.resolve(signInResult);
} else {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCustomToken failed.", task.getException());
call.reject(ERROR_SIGN_IN_FAILED);
}
}
}
)
.addOnFailureListener(
plugin.getActivity(),
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception) {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCustomToken failed.", exception);
call.reject(ERROR_SIGN_IN_FAILED);
}
}
);
}
// 1. update params
public void handleSuccessfulSignIn(final PluginCall call, AuthCredential credential, String idToken, String id) {
boolean skipNativeAuth = this.config.getSkipNativeAuth();
if (skipNativeAuth) {
// 2. update call params
JSObject signInResult = FirebaseAuthenticationHelper.createSignInResult(null, credential, idToken, id); // update call params
call.resolve(signInResult);
return;
}
firebaseAuthInstance
.signInWithCredential(credential)
.addOnCompleteListener(
plugin.getActivity(),
new OnCompleteListener<AuthResult>() {
@Override
public void onComplete(@NonNull Task<AuthResult> task) {
if (task.isSuccessful()) {
Log.d(FirebaseAuthenticationPlugin.TAG, "signInWithCredential succeeded.");
FirebaseUser user = getCurrentUser();
// 3. update call params
JSObject signInResult = FirebaseAuthenticationHelper.createSignInResult(user, credential, idToken, id); // update call params
call.resolve(signInResult);
} else {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCredential failed.", task.getException());
call.reject(ERROR_SIGN_IN_FAILED);
}
}
}
)
.addOnFailureListener(
plugin.getActivity(),
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception exception) {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCredential failed.", exception);
call.reject(ERROR_SIGN_IN_FAILED);
}
}
);
}
```
3. handlers/PlayGamesAuthProviderHandler.java
update one Function
```java
public void handleOnActivityResult(PluginCall call, ActivityResult result) {
Intent data = result.getData();
Task<GoogleSignInAccount> task = GoogleSignIn.getSignedInAccountFromIntent(data);
try {
GoogleSignInAccount account = task.getResult(ApiException.class);
String serverAuthCode = account.getServerAuthCode();
AuthCredential credential = PlayGamesAuthProvider.getCredential(serverAuthCode);
String idToken = account.getIdToken();
String id = account.getId(); // add this and update call from next line
pluginImplementation.handleSuccessfulSignIn(call, credential, idToken, id); // update call params
} catch (ApiException exception) {
pluginImplementation.handleFailedSignIn(call, null, exception);
}
}
```
4. handlers/PhoneAuthProviderHandler
// updated three functions
```java
private void handleVerificationCode(PluginCall call, String verificationId, String verificationCode) {
PhoneAuthCredential credential = PhoneAuthProvider.getCredential(verificationId, verificationCode);
// update call params from next line
pluginImplementation.handleSuccessfulSignIn(call, credential, null, null); // update call params
}
@Override
public void onVerificationCompleted(PhoneAuthCredential credential) {
// update call params from next line
pluginImplementation.handleSuccessfulSignIn(call, credential, null, null); // update call params
}
@Override
public void onCodeSent(@NonNull String verificationId, @NonNull PhoneAuthProvider.ForceResendingToken token) {
// update call params from next line
JSObject result = FirebaseAuthenticationHelper.createSignInResult(null, null, null, null); // update call params
result.put("verificationId", verificationId);
call.resolve(result);
}
```
5. handlers/OAuthProviderHandler
// update two functions
```java
private void startActivityForSignIn(final PluginCall call, OAuthProvider.Builder provider) {
pluginImplementation
.getFirebaseAuthInstance()
.startActivityForSignInWithProvider(pluginImplementation.getPlugin().getActivity(), provider.build())
.addOnSuccessListener(
authResult -> {
AuthCredential credential = authResult.getCredential();
// add next line and update call params
Object userId = authResult.getAdditionalUserInfo().getProfile().get("id");
pluginImplementation.handleSuccessfulSignIn(call, credential, null, userId.toString()); // update call params
}
)
.addOnFailureListener(exception -> pluginImplementation.handleFailedSignIn(call, null, exception));
}
private void finishActivityForSignIn(final PluginCall call, Task<AuthResult> pendingResultTask) {
pendingResultTask
.addOnSuccessListener(
authResult -> {
AuthCredential credential = authResult.getCredential();
// add next line and update call params
Object userId = authResult.getAdditionalUserInfo().getProfile().get("id");
pluginImplementation.handleSuccessfulSignIn(call, credential, null, userId.toString()); // update call params
}
)
.addOnFailureListener(exception -> pluginImplementation.handleFailedSignIn(call, null, exception));
}
```
6. handlers/GoogleAuthProviderHandler
update one function
```java
public void handleOnActivityResult(PluginCall call, ActivityResult result) {
Intent data = result.getData();
Task<GoogleSignInAccount> task = GoogleSignIn.getSignedInAccountFromIntent(data);
try {
GoogleSignInAccount account = task.getResult(ApiException.class);
String idToken = account.getIdToken();
// add next line and update call params
String id = account.getId();
AuthCredential credential = GoogleAuthProvider.getCredential(idToken, null);
pluginImplementation.handleSuccessfulSignIn(call, credential, idToken, id); // update call params
} catch (ApiException exception) {
pluginImplementation.handleFailedSignIn(call, null, exception);
}
}
```
7. handlers/FacebookAuthProviderHandler
update one function
```java
private void handleSuccessCallback(LoginResult loginResult) {
AccessToken accessToken = loginResult.getAccessToken();
String token = accessToken.getToken();
// add next line and update call params
String id = accessToken.getUserId();
AuthCredential credential = FacebookAuthProvider.getCredential(token);
pluginImplementation.handleSuccessfulSignIn(savedCall, credential, token, id); // update call params
}
```
| 1.0 | feat(authentication): expose `AdditionalUserInfo` - **Is your feature request related to an issue? Please describe:**
<!-- A clear and concise explanation of what the problem is. Former. [...] -->
Yeah. In social media entries (Facebook, Google, GooglePlay, Twitter, Apple etc.), the user's id value connected to the provider does not appear. (GoogleId value does not come for login with Google.)
**Describe your desired solution:**
<!-- A clear and concise description of what you want it to be. -->
When the user logs in, the user's id value can be added according to the response that the provider returns. (I left a small demo below)
**Describe the alternatives you are considering:**
<!-- A clear and concise description of any alternative solutions or features you are considering. -->
I mentioned it in the additional context section.
**Additional context:**
<!-- Add any other context or screenshots related to the feature request here. -->
I have attached the codes to be edited. This is for android only. It works successfully when I tested it for Facebook, Google and Twitter.
Thanks for your time!
1. FirebaseAuthenticationHelper.java
// updated one function
```java
// update params
public static JSObject createSignInResult(FirebaseUser user, AuthCredential credential, String idToken, String id) {
JSObject userResult = FirebaseAuthenticationHelper.createUserResult(user);
JSObject credentialResult = FirebaseAuthenticationHelper.createCredentialResult(credential, idToken); // update call params
JSObject result = new JSObject();
// add next line
userResult.put("id", id); // add this line
result.put("user", userResult);
result.put("credential", credentialResult);
return result;
}
```
2. FirebaseAuthentication.java
// updated two functions
```java
public void signInWithCustomToken(PluginCall call) {
boolean skipNativeAuth = this.config.getSkipNativeAuth();
if (skipNativeAuth) {
call.reject(ERROR_CUSTOM_TOKEN_SKIP_NATIVE_AUTH);
return;
}
String token = call.getString("token", "");
firebaseAuthInstance
.signInWithCustomToken(token)
.addOnCompleteListener(
plugin.getActivity(),
new OnCompleteListener<AuthResult>() {
@Override
public void onComplete(@NonNull Task<AuthResult> task) {
if (task.isSuccessful()) {
Log.d(FirebaseAuthenticationPlugin.TAG, "signInWithCustomToken succeeded.");
FirebaseUser user = getCurrentUser();
// updated next line
JSObject signInResult = FirebaseAuthenticationHelper.createSignInResult(user, null, null, null); // update call params
call.resolve(signInResult);
} else {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCustomToken failed.", task.getException());
call.reject(ERROR_SIGN_IN_FAILED);
}
}
}
)
.addOnFailureListener(
plugin.getActivity(),
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception) {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCustomToken failed.", exception);
call.reject(ERROR_SIGN_IN_FAILED);
}
}
);
}
// 1. update params
public void handleSuccessfulSignIn(final PluginCall call, AuthCredential credential, String idToken, String id) {
boolean skipNativeAuth = this.config.getSkipNativeAuth();
if (skipNativeAuth) {
// 2. update call params
JSObject signInResult = FirebaseAuthenticationHelper.createSignInResult(null, credential, idToken, id); // update call params
call.resolve(signInResult);
return;
}
firebaseAuthInstance
.signInWithCredential(credential)
.addOnCompleteListener(
plugin.getActivity(),
new OnCompleteListener<AuthResult>() {
@Override
public void onComplete(@NonNull Task<AuthResult> task) {
if (task.isSuccessful()) {
Log.d(FirebaseAuthenticationPlugin.TAG, "signInWithCredential succeeded.");
FirebaseUser user = getCurrentUser();
// 3. update call params
JSObject signInResult = FirebaseAuthenticationHelper.createSignInResult(user, credential, idToken, id); // update call params
call.resolve(signInResult);
} else {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCredential failed.", task.getException());
call.reject(ERROR_SIGN_IN_FAILED);
}
}
}
)
.addOnFailureListener(
plugin.getActivity(),
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception exception) {
Log.e(FirebaseAuthenticationPlugin.TAG, "signInWithCredential failed.", exception);
call.reject(ERROR_SIGN_IN_FAILED);
}
}
);
}
```
3. handlers/PlayGamesAuthProviderHandler.java
update one Function
```java
public void handleOnActivityResult(PluginCall call, ActivityResult result) {
Intent data = result.getData();
Task<GoogleSignInAccount> task = GoogleSignIn.getSignedInAccountFromIntent(data);
try {
GoogleSignInAccount account = task.getResult(ApiException.class);
String serverAuthCode = account.getServerAuthCode();
AuthCredential credential = PlayGamesAuthProvider.getCredential(serverAuthCode);
String idToken = account.getIdToken();
String id = account.getId(); // add this and update call from next line
pluginImplementation.handleSuccessfulSignIn(call, credential, idToken, id); // update call params
} catch (ApiException exception) {
pluginImplementation.handleFailedSignIn(call, null, exception);
}
}
```
4. handlers/PhoneAuthProviderHandler
// updated three functions
```java
private void handleVerificationCode(PluginCall call, String verificationId, String verificationCode) {
PhoneAuthCredential credential = PhoneAuthProvider.getCredential(verificationId, verificationCode);
// update call params from next line
pluginImplementation.handleSuccessfulSignIn(call, credential, null, null); // update call params
}
@Override
public void onVerificationCompleted(PhoneAuthCredential credential) {
// update call params from next line
pluginImplementation.handleSuccessfulSignIn(call, credential, null, null); // update call params
}
@Override
public void onCodeSent(@NonNull String verificationId, @NonNull PhoneAuthProvider.ForceResendingToken token) {
// update call params from next line
JSObject result = FirebaseAuthenticationHelper.createSignInResult(null, null, null, null); // update call params
result.put("verificationId", verificationId);
call.resolve(result);
}
```
5. handlers/OAuthProviderHandler
// update two functions
```java
private void startActivityForSignIn(final PluginCall call, OAuthProvider.Builder provider) {
pluginImplementation
.getFirebaseAuthInstance()
.startActivityForSignInWithProvider(pluginImplementation.getPlugin().getActivity(), provider.build())
.addOnSuccessListener(
authResult -> {
AuthCredential credential = authResult.getCredential();
// add next line and update call params
Object userId = authResult.getAdditionalUserInfo().getProfile().get("id");
pluginImplementation.handleSuccessfulSignIn(call, credential, null, userId.toString()); // update call params
}
)
.addOnFailureListener(exception -> pluginImplementation.handleFailedSignIn(call, null, exception));
}
private void finishActivityForSignIn(final PluginCall call, Task<AuthResult> pendingResultTask) {
pendingResultTask
.addOnSuccessListener(
authResult -> {
AuthCredential credential = authResult.getCredential();
// add next line and update call params
Object userId = authResult.getAdditionalUserInfo().getProfile().get("id");
pluginImplementation.handleSuccessfulSignIn(call, credential, null, userId.toString()); // update call params
}
)
.addOnFailureListener(exception -> pluginImplementation.handleFailedSignIn(call, null, exception));
}
```
6. handlers/GoogleAuthProviderHandler
update one function
```java
public void handleOnActivityResult(PluginCall call, ActivityResult result) {
Intent data = result.getData();
Task<GoogleSignInAccount> task = GoogleSignIn.getSignedInAccountFromIntent(data);
try {
GoogleSignInAccount account = task.getResult(ApiException.class);
String idToken = account.getIdToken();
// add next line and update call params
String id = account.getId();
AuthCredential credential = GoogleAuthProvider.getCredential(idToken, null);
pluginImplementation.handleSuccessfulSignIn(call, credential, idToken, id); // update call params
} catch (ApiException exception) {
pluginImplementation.handleFailedSignIn(call, null, exception);
}
}
```
7. handlers/FacebookAuthProviderHandler
update one function
```java
private void handleSuccessCallback(LoginResult loginResult) {
AccessToken accessToken = loginResult.getAccessToken();
String token = accessToken.getToken();
// add next line and update call params
String id = accessToken.getUserId();
AuthCredential credential = FacebookAuthProvider.getCredential(token);
pluginImplementation.handleSuccessfulSignIn(savedCall, credential, token, id); // update call params
}
```
| priority | feat authentication expose additionaluserinfo is your feature request related to an issue please describe yeah in social media entries facebook google googleplay twitter apple etc the user s id value connected to the provider does not appear googleid value does not come for login with google describe your desired solution when the user logs in the user s id value can be added according to the response that the provider returns i left a small demo below describe the alternatives you are considering i mentioned it in the additional context section additional context i have attached the codes to be edited this is for android only it works successfully when i tested it for facebook google and twitter thanks for your time firebaseauthenticationhelper java updated one function java update params public static jsobject createsigninresult firebaseuser user authcredential credential string idtoken string id jsobject userresult firebaseauthenticationhelper createuserresult user jsobject credentialresult firebaseauthenticationhelper createcredentialresult credential idtoken update call params jsobject result new jsobject add next line userresult put id id add this line result put user userresult result put credential credentialresult return result firebaseauthentication java updated two functions java public void signinwithcustomtoken plugincall call boolean skipnativeauth this config getskipnativeauth if skipnativeauth call reject error custom token skip native auth return string token call getstring token firebaseauthinstance signinwithcustomtoken token addoncompletelistener plugin getactivity new oncompletelistener override public void oncomplete nonnull task task if task issuccessful log d firebaseauthenticationplugin tag signinwithcustomtoken succeeded firebaseuser user getcurrentuser updated next line jsobject signinresult firebaseauthenticationhelper createsigninresult user null null null update call params call resolve signinresult else log e firebaseauthenticationplugin tag signinwithcustomtoken failed task getexception call reject error sign in failed addonfailurelistener plugin getactivity new onfailurelistener override public void onfailure nonnull exception log e firebaseauthenticationplugin tag signinwithcustomtoken failed exception call reject error sign in failed update params public void handlesuccessfulsignin final plugincall call authcredential credential string idtoken string id boolean skipnativeauth this config getskipnativeauth if skipnativeauth update call params jsobject signinresult firebaseauthenticationhelper createsigninresult null credential idtoken id update call params call resolve signinresult return firebaseauthinstance signinwithcredential credential addoncompletelistener plugin getactivity new oncompletelistener override public void oncomplete nonnull task task if task issuccessful log d firebaseauthenticationplugin tag signinwithcredential succeeded firebaseuser user getcurrentuser update call params jsobject signinresult firebaseauthenticationhelper createsigninresult user credential idtoken id update call params call resolve signinresult else log e firebaseauthenticationplugin tag signinwithcredential failed task getexception call reject error sign in failed addonfailurelistener plugin getactivity new onfailurelistener override public void onfailure nonnull exception exception log e firebaseauthenticationplugin tag signinwithcredential failed exception call reject error sign in failed handlers playgamesauthproviderhandler java update one function java public void handleonactivityresult plugincall call activityresult result intent data result getdata task task googlesignin getsignedinaccountfromintent data try googlesigninaccount account task getresult apiexception class string serverauthcode account getserverauthcode authcredential credential playgamesauthprovider getcredential serverauthcode string idtoken account getidtoken string id account getid add this and update call from next line pluginimplementation handlesuccessfulsignin call credential idtoken id update call params catch apiexception exception pluginimplementation handlefailedsignin call null exception handlers phoneauthproviderhandler updated three functions java private void handleverificationcode plugincall call string verificationid string verificationcode phoneauthcredential credential phoneauthprovider getcredential verificationid verificationcode update call params from next line pluginimplementation handlesuccessfulsignin call credential null null update call params override public void onverificationcompleted phoneauthcredential credential update call params from next line pluginimplementation handlesuccessfulsignin call credential null null update call params override public void oncodesent nonnull string verificationid nonnull phoneauthprovider forceresendingtoken token update call params from next line jsobject result firebaseauthenticationhelper createsigninresult null null null null update call params result put verificationid verificationid call resolve result handlers oauthproviderhandler update two functions java private void startactivityforsignin final plugincall call oauthprovider builder provider pluginimplementation getfirebaseauthinstance startactivityforsigninwithprovider pluginimplementation getplugin getactivity provider build addonsuccesslistener authresult authcredential credential authresult getcredential add next line and update call params object userid authresult getadditionaluserinfo getprofile get id pluginimplementation handlesuccessfulsignin call credential null userid tostring update call params addonfailurelistener exception pluginimplementation handlefailedsignin call null exception private void finishactivityforsignin final plugincall call task pendingresulttask pendingresulttask addonsuccesslistener authresult authcredential credential authresult getcredential add next line and update call params object userid authresult getadditionaluserinfo getprofile get id pluginimplementation handlesuccessfulsignin call credential null userid tostring update call params addonfailurelistener exception pluginimplementation handlefailedsignin call null exception handlers googleauthproviderhandler update one function java public void handleonactivityresult plugincall call activityresult result intent data result getdata task task googlesignin getsignedinaccountfromintent data try googlesigninaccount account task getresult apiexception class string idtoken account getidtoken add next line and update call params string id account getid authcredential credential googleauthprovider getcredential idtoken null pluginimplementation handlesuccessfulsignin call credential idtoken id update call params catch apiexception exception pluginimplementation handlefailedsignin call null exception handlers facebookauthproviderhandler update one function java private void handlesuccesscallback loginresult loginresult accesstoken accesstoken loginresult getaccesstoken string token accesstoken gettoken add next line and update call params string id accesstoken getuserid authcredential credential facebookauthprovider getcredential token pluginimplementation handlesuccessfulsignin savedcall credential token id update call params | 1 |
530,438 | 15,428,719,204 | IssuesEvent | 2021-03-06 00:36:40 | marklogic/marklogic-data-hub | https://api.github.com/repos/marklogic/marklogic-data-hub | closed | On redeploy, only remove hub modules | Enhancement priority:medium | When Quick Start redeploys, it wipes the modules database. Need to only remove the modules that DHF puts in the modules database and replace those. Anything put into the modules database by a different process or application (for instance, a slush-generator search app) should be left alone. | 1.0 | On redeploy, only remove hub modules - When Quick Start redeploys, it wipes the modules database. Need to only remove the modules that DHF puts in the modules database and replace those. Anything put into the modules database by a different process or application (for instance, a slush-generator search app) should be left alone. | priority | on redeploy only remove hub modules when quick start redeploys it wipes the modules database need to only remove the modules that dhf puts in the modules database and replace those anything put into the modules database by a different process or application for instance a slush generator search app should be left alone | 1 |
89,580 | 3,797,121,207 | IssuesEvent | 2016-03-23 05:31:42 | RickyGAkl/yahoo-finance-managed | https://api.github.com/repos/RickyGAkl/yahoo-finance-managed | closed | cannot downlload options data for OEX Index | auto-migrated Priority-Medium Type-Enhancement | ```
What steps will reproduce the problem?
1.trying to get options data
2.
3.
What is the expected output? What do you see instead?
Strikeprice, type, symbol etc..
Do you have other informations?
method getlastdetofstock works for cash equities
Suggestions to fix the defect?
```
Original issue reported on code.google.com by `alexandr...@gmail.com` on 6 Nov 2012 at 2:40
Attachments:
* [optdlTest.zip](https://storage.googleapis.com/google-code-attachments/yahoo-finance-managed/issue-63/comment-0/optdlTest.zip)
| 1.0 | cannot downlload options data for OEX Index - ```
What steps will reproduce the problem?
1.trying to get options data
2.
3.
What is the expected output? What do you see instead?
Strikeprice, type, symbol etc..
Do you have other informations?
method getlastdetofstock works for cash equities
Suggestions to fix the defect?
```
Original issue reported on code.google.com by `alexandr...@gmail.com` on 6 Nov 2012 at 2:40
Attachments:
* [optdlTest.zip](https://storage.googleapis.com/google-code-attachments/yahoo-finance-managed/issue-63/comment-0/optdlTest.zip)
| priority | cannot downlload options data for oex index what steps will reproduce the problem trying to get options data what is the expected output what do you see instead strikeprice type symbol etc do you have other informations method getlastdetofstock works for cash equities suggestions to fix the defect original issue reported on code google com by alexandr gmail com on nov at attachments | 1 |
103,912 | 4,187,296,994 | IssuesEvent | 2016-06-23 17:01:29 | isawnyu/isaw.web | https://api.github.com/repos/isawnyu/isaw.web | closed | cropping option missing from multiple content types (events, pages, possibly more) | deploy medium priority | When editing an Event Item with a lead image, I don't have the Cropping option to adjust cropping settings for that lead image (the way that it's an option for news items). | 1.0 | cropping option missing from multiple content types (events, pages, possibly more) - When editing an Event Item with a lead image, I don't have the Cropping option to adjust cropping settings for that lead image (the way that it's an option for news items). | priority | cropping option missing from multiple content types events pages possibly more when editing an event item with a lead image i don t have the cropping option to adjust cropping settings for that lead image the way that it s an option for news items | 1 |
786,505 | 27,657,441,702 | IssuesEvent | 2023-03-12 05:17:26 | prgrms-web-devcourse/Team-Kkini-Mukvengers-FE | https://api.github.com/repos/prgrms-web-devcourse/Team-Kkini-Mukvengers-FE | closed | 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 | Priority: Medium Feature | ## 📕 작업 설명
> 밥모임 상세 페이지에서 뒤로가기 버튼 만들기
## 📖 To-Do list
- [ ] 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 | 1.0 | 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 - ## 📕 작업 설명
> 밥모임 상세 페이지에서 뒤로가기 버튼 만들기
## 📖 To-Do list
- [ ] 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 | priority | 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 📕 작업 설명 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 📖 to do list 밥모임 상세 페이지에서 뒤로가기 버튼 만들기 | 1 |
226,816 | 7,523,222,793 | IssuesEvent | 2018-04-12 23:42:06 | Fireboyd78/mm2hook | https://api.github.com/repos/Fireboyd78/mm2hook | opened | Impose upper limit for speedometer/tachometer values | RV6 Roadmap enhancement help wanted medium priority | May require to be hardcoded, as new data cannot be added to existing vehicle information. | 1.0 | Impose upper limit for speedometer/tachometer values - May require to be hardcoded, as new data cannot be added to existing vehicle information. | priority | impose upper limit for speedometer tachometer values may require to be hardcoded as new data cannot be added to existing vehicle information | 1 |
630,379 | 20,107,305,429 | IssuesEvent | 2022-02-07 11:49:03 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | closed | [IDE] CSVIM Editor not opening csv files in the csv editor | bug web-ide usability priority-medium efforts-low | **Describe the bug**
The CSVIM editor does not specify an editor id, resulting in the csv file being opened by the default editor which is Monaco.
**To Reproduce**
Steps to reproduce the behavior:
1. Open a csvim file
2. Click on 'Open'
3. See issue
**Expected behavior**
The Monaco editor should be used only if the csv editor is not available.
**Desktop:**
- OS: macOS 12.2, Fedora Linux 35
- Browser: Firefox 96
- Version: Dirigible 6.1.22
| 1.0 | [IDE] CSVIM Editor not opening csv files in the csv editor - **Describe the bug**
The CSVIM editor does not specify an editor id, resulting in the csv file being opened by the default editor which is Monaco.
**To Reproduce**
Steps to reproduce the behavior:
1. Open a csvim file
2. Click on 'Open'
3. See issue
**Expected behavior**
The Monaco editor should be used only if the csv editor is not available.
**Desktop:**
- OS: macOS 12.2, Fedora Linux 35
- Browser: Firefox 96
- Version: Dirigible 6.1.22
| priority | csvim editor not opening csv files in the csv editor describe the bug the csvim editor does not specify an editor id resulting in the csv file being opened by the default editor which is monaco to reproduce steps to reproduce the behavior open a csvim file click on open see issue expected behavior the monaco editor should be used only if the csv editor is not available desktop os macos fedora linux browser firefox version dirigible | 1 |
535,615 | 15,691,608,564 | IssuesEvent | 2021-03-25 18:04:41 | CS506-Oversight/autorack-front | https://api.github.com/repos/CS506-Oversight/autorack-front | opened | Dark theme | Priority-Medium Type-Enhancements | Check if material-ui provides one. If not, customize one.
Ideal settings:
- Background color: `#323232` ~ `#505050`
- Font color: `#FFFFFF` | 1.0 | Dark theme - Check if material-ui provides one. If not, customize one.
Ideal settings:
- Background color: `#323232` ~ `#505050`
- Font color: `#FFFFFF` | priority | dark theme check if material ui provides one if not customize one ideal settings background color font color ffffff | 1 |
514,792 | 14,944,092,009 | IssuesEvent | 2021-01-26 00:35:27 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | mimxrt1050_evk: testcase tests/kernel/fatal/exception/ failed to be ran | bug priority: medium | To Reproduce
Steps to reproduce the behavior:
sanitycheck -p mimxrt1050_evk --device-testing --device-serial /dev/ttyACM0 -T tests/kernel/fatal/exception/
see error
*** Booting Zephyr OS build zephyr-v2.4.0-2092-g748e7b6d7515 ***
E: ***** MPU FAULT *****
E: Stacking error (context area might be not valid)
E: Data Access Violation
E: MMFAR Address: 0x80002474
E: r0/a1: 0x0b0d0e1f r1/a2: 0x0f1f1f0b r2/a3: 0x090f1f0e
E: r3/a4: 0x0f8d0f03 r12/ip: 0x4327471b r14/lr: 0x090f8c05
E: xpsr: 0x00000000
E: s[ 0]: 0x00000000 s[ 1]: 0x00000000 s[ 2]: 0x00000000 s[ 3]: 0x00000000
E: s[ 4]: 0x00000000 s[ 5]: 0x00000000 s[ 6]: 0x00000000 s[ 7]: 0x00000000
E: s[ 8]: 0x00000000 s[ 9]: 0xffffffff s[10]: 0x00000000 s[11]: 0x00000000
E: s[12]: 0x00000000 s[13]: 0xffffffff s[14]: 0x00000000 s[15]: 0x00000000
E: fpscr: 0x8000046c
E: Faulting instruction address (r15/pc): 0x00000000
E: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0
E: Current thread: 0x80000358 (main)
Caught system error -- reason 2
Was not expecting a crash
Environment (please complete the following information):
OS: Fedora33
Toolchain: zephyr-sdk-0.11.4
Commit ID: 748e7b6d7515 | 1.0 | mimxrt1050_evk: testcase tests/kernel/fatal/exception/ failed to be ran - To Reproduce
Steps to reproduce the behavior:
sanitycheck -p mimxrt1050_evk --device-testing --device-serial /dev/ttyACM0 -T tests/kernel/fatal/exception/
see error
*** Booting Zephyr OS build zephyr-v2.4.0-2092-g748e7b6d7515 ***
E: ***** MPU FAULT *****
E: Stacking error (context area might be not valid)
E: Data Access Violation
E: MMFAR Address: 0x80002474
E: r0/a1: 0x0b0d0e1f r1/a2: 0x0f1f1f0b r2/a3: 0x090f1f0e
E: r3/a4: 0x0f8d0f03 r12/ip: 0x4327471b r14/lr: 0x090f8c05
E: xpsr: 0x00000000
E: s[ 0]: 0x00000000 s[ 1]: 0x00000000 s[ 2]: 0x00000000 s[ 3]: 0x00000000
E: s[ 4]: 0x00000000 s[ 5]: 0x00000000 s[ 6]: 0x00000000 s[ 7]: 0x00000000
E: s[ 8]: 0x00000000 s[ 9]: 0xffffffff s[10]: 0x00000000 s[11]: 0x00000000
E: s[12]: 0x00000000 s[13]: 0xffffffff s[14]: 0x00000000 s[15]: 0x00000000
E: fpscr: 0x8000046c
E: Faulting instruction address (r15/pc): 0x00000000
E: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0
E: Current thread: 0x80000358 (main)
Caught system error -- reason 2
Was not expecting a crash
Environment (please complete the following information):
OS: Fedora33
Toolchain: zephyr-sdk-0.11.4
Commit ID: 748e7b6d7515 | priority | evk testcase tests kernel fatal exception failed to be ran to reproduce steps to reproduce the behavior sanitycheck p evk device testing device serial dev t tests kernel fatal exception see error booting zephyr os build zephyr e mpu fault e stacking error context area might be not valid e data access violation e mmfar address e e ip lr e xpsr e s s s s e s s s s e s s s s e s s s s e fpscr e faulting instruction address pc e zephyr fatal error stack overflow on cpu e current thread main caught system error reason was not expecting a crash environment please complete the following information os toolchain zephyr sdk commit id | 1 |
565,730 | 16,768,285,211 | IssuesEvent | 2021-06-14 11:48:29 | canonical-web-and-design/vanilla-framework | https://api.github.com/repos/canonical-web-and-design/vanilla-framework | closed | Typo in navigation examples | Priority: Medium | In the navigation examples the last item is using the wrong class name. | 1.0 | Typo in navigation examples - In the navigation examples the last item is using the wrong class name. | priority | typo in navigation examples in the navigation examples the last item is using the wrong class name | 1 |
612,563 | 19,025,799,147 | IssuesEvent | 2021-11-24 03:13:31 | crombird/meta | https://api.github.com/repos/crombird/meta | opened | Include prompts in slash command outputs | type/feature-request priority/3-medium integration/discord | Users can't see the slash command parameters until they click on the command name in the interaction. And this is not always intuitive. In the message format, the prompt immediately preceded the response, so you could tell what the response was for, i.e.
> **user**
> ?? search query
>
> **CROM**
> (response)
Slash commands just print out the user and the command name:
> **user** used **/search**
> └ (response)
The "fix" here would be to print out the search query in the message contents or the embed title. | 1.0 | Include prompts in slash command outputs - Users can't see the slash command parameters until they click on the command name in the interaction. And this is not always intuitive. In the message format, the prompt immediately preceded the response, so you could tell what the response was for, i.e.
> **user**
> ?? search query
>
> **CROM**
> (response)
Slash commands just print out the user and the command name:
> **user** used **/search**
> └ (response)
The "fix" here would be to print out the search query in the message contents or the embed title. | priority | include prompts in slash command outputs users can t see the slash command parameters until they click on the command name in the interaction and this is not always intuitive in the message format the prompt immediately preceded the response so you could tell what the response was for i e user search query crom response slash commands just print out the user and the command name user used search └ response the fix here would be to print out the search query in the message contents or the embed title | 1 |
114,690 | 4,642,737,507 | IssuesEvent | 2016-09-30 10:46:59 | softdevteam/krun | https://api.github.com/repos/softdevteam/krun | closed | Rework run_shell_cmd() | enhancement medium priority (a clear improvement but not a blocker for publication) | As discussed with @snim2:
* Move `util.run_shell_cmd()` into the platform instance.
* It should accept an optional argument `user`, which if not `None` uses `sudo` or `doas` to invoke the command as another user
* It should accept a list of arguments, and never a string.
* We should replace any manual `os.system()`, `subprocess.Popen()` with calls to the new method.
| 1.0 | Rework run_shell_cmd() - As discussed with @snim2:
* Move `util.run_shell_cmd()` into the platform instance.
* It should accept an optional argument `user`, which if not `None` uses `sudo` or `doas` to invoke the command as another user
* It should accept a list of arguments, and never a string.
* We should replace any manual `os.system()`, `subprocess.Popen()` with calls to the new method.
| priority | rework run shell cmd as discussed with move util run shell cmd into the platform instance it should accept an optional argument user which if not none uses sudo or doas to invoke the command as another user it should accept a list of arguments and never a string we should replace any manual os system subprocess popen with calls to the new method | 1 |
84,188 | 3,655,018,015 | IssuesEvent | 2016-02-17 14:57:03 | miracle091/transmission-remote-dotnet | https://api.github.com/repos/miracle091/transmission-remote-dotnet | closed | folder structure when adding torrent | Priority-Medium Type-Enhancement | ```
When adding a torrent, you see the files contained as a flat list.
If you download a 0-day warez pack wich sometimes conatinas thousands of files,
it can be rather messy selecting just some of the items in there.
In for example utorrent, the files are in a tree-view. I usually deselect
everything except for the odd few folders i want to download.
```
Original issue reported on code.google.com by `daniel.r...@gmail.com` on 2 Mar 2012 at 10:45 | 1.0 | folder structure when adding torrent - ```
When adding a torrent, you see the files contained as a flat list.
If you download a 0-day warez pack wich sometimes conatinas thousands of files,
it can be rather messy selecting just some of the items in there.
In for example utorrent, the files are in a tree-view. I usually deselect
everything except for the odd few folders i want to download.
```
Original issue reported on code.google.com by `daniel.r...@gmail.com` on 2 Mar 2012 at 10:45 | priority | folder structure when adding torrent when adding a torrent you see the files contained as a flat list if you download a day warez pack wich sometimes conatinas thousands of files it can be rather messy selecting just some of the items in there in for example utorrent the files are in a tree view i usually deselect everything except for the odd few folders i want to download original issue reported on code google com by daniel r gmail com on mar at | 1 |
179,017 | 6,620,901,525 | IssuesEvent | 2017-09-21 17:08:13 | crcn/tandem | https://api.github.com/repos/crcn/tandem | closed | History | Feature Medium priority | Not so important with Tandem since history exists with the users text editor. This feature will be required if the user decides to use the built-in text editor option (when it's implemented). Or if the editor is used online. | 1.0 | History - Not so important with Tandem since history exists with the users text editor. This feature will be required if the user decides to use the built-in text editor option (when it's implemented). Or if the editor is used online. | priority | history not so important with tandem since history exists with the users text editor this feature will be required if the user decides to use the built in text editor option when it s implemented or if the editor is used online | 1 |
794,862 | 28,052,549,823 | IssuesEvent | 2023-03-29 07:09:03 | AY2223S2-CS2103-F11-3/tp | https://api.github.com/repos/AY2223S2-CS2103-F11-3/tp | closed | `APPEND`, `REMOVE`, `REPLACE` for list attributes | priority.Medium type.Enhancement | Currently list attributes only support replace.
`APPEND` and `REMOVE` would be nice so that the user will not have to retype the entire attribute. | 1.0 | `APPEND`, `REMOVE`, `REPLACE` for list attributes - Currently list attributes only support replace.
`APPEND` and `REMOVE` would be nice so that the user will not have to retype the entire attribute. | priority | append remove replace for list attributes currently list attributes only support replace append and remove would be nice so that the user will not have to retype the entire attribute | 1 |
347,036 | 10,423,587,641 | IssuesEvent | 2019-09-16 11:49:12 | Th3-Fr3d/pmdbs | https://api.github.com/repos/Th3-Fr3d/pmdbs | closed | Redesign MainForm Header | medium priority tweak | Change the MainForm headers / titles to match the CertificateForm and BreachForm design | 1.0 | Redesign MainForm Header - Change the MainForm headers / titles to match the CertificateForm and BreachForm design | priority | redesign mainform header change the mainform headers titles to match the certificateform and breachform design | 1 |
566,238 | 16,816,236,747 | IssuesEvent | 2021-06-17 07:44:16 | ansible-collections/azure | https://api.github.com/repos/ansible-collections/azure | closed | Include data_disk name in return values for "azure_rm_virtualmachine_info" | has_pr medium_priority | ##### SUMMARY
Include data_disk name in return values for "azure_rm_virtualmachine_info"
##### ISSUE TYPE
- Current return values for the module doesn't capture the data disk names
##### COMPONENT NAME
azure_rm_virtualmachine_info
##### ADDITIONAL INFORMATION
- Have been working on an use case Azure VM disk expansion and the disk name or id is a mandatory field to query or extend the disk, the module "azure_rm_virtualmachine_info" can be used to capture the disk name and current size to be later used in subsequent tasks to extend the disk and validate.
- Azure CLI "az vm show" lists out the data disks with disk names.
| 1.0 | Include data_disk name in return values for "azure_rm_virtualmachine_info" - ##### SUMMARY
Include data_disk name in return values for "azure_rm_virtualmachine_info"
##### ISSUE TYPE
- Current return values for the module doesn't capture the data disk names
##### COMPONENT NAME
azure_rm_virtualmachine_info
##### ADDITIONAL INFORMATION
- Have been working on an use case Azure VM disk expansion and the disk name or id is a mandatory field to query or extend the disk, the module "azure_rm_virtualmachine_info" can be used to capture the disk name and current size to be later used in subsequent tasks to extend the disk and validate.
- Azure CLI "az vm show" lists out the data disks with disk names.
| priority | include data disk name in return values for azure rm virtualmachine info summary include data disk name in return values for azure rm virtualmachine info issue type current return values for the module doesn t capture the data disk names component name azure rm virtualmachine info additional information have been working on an use case azure vm disk expansion and the disk name or id is a mandatory field to query or extend the disk the module azure rm virtualmachine info can be used to capture the disk name and current size to be later used in subsequent tasks to extend the disk and validate azure cli az vm show lists out the data disks with disk names | 1 |
249,862 | 7,964,843,790 | IssuesEvent | 2018-07-13 23:58:24 | SETI/pds-opus | https://api.github.com/repos/SETI/pds-opus | closed | Inconsistent tooltip interface | A-Bug Effort 2 Medium Priority 3 | In some cases (like the search categories) you hover over the (i) to get the tooltip, but in other cases (widget titles) you click on the (i) to get the tooltip. | 1.0 | Inconsistent tooltip interface - In some cases (like the search categories) you hover over the (i) to get the tooltip, but in other cases (widget titles) you click on the (i) to get the tooltip. | priority | inconsistent tooltip interface in some cases like the search categories you hover over the i to get the tooltip but in other cases widget titles you click on the i to get the tooltip | 1 |
247,158 | 7,904,319,464 | IssuesEvent | 2018-07-02 03:38:09 | medic/medic-webapp | https://api.github.com/repos/medic/medic-webapp | closed | Determine how we migrate large instances from CouchDB 1.x to CouchDB 2.0 | Priority: 2 - Medium Status: 1 - Triaged Type: Technical issue Upgrading | We have some large instances, and it's going to be a pain to migrate to CouchDB 2.0, primary because it will force all long term sessions to be logged out (ie all CHWs will have to log back in).
We should look into how we can get around this, and if we definitely can't, strategies for migrating people over slowly (ie running both at the same time). | 1.0 | Determine how we migrate large instances from CouchDB 1.x to CouchDB 2.0 - We have some large instances, and it's going to be a pain to migrate to CouchDB 2.0, primary because it will force all long term sessions to be logged out (ie all CHWs will have to log back in).
We should look into how we can get around this, and if we definitely can't, strategies for migrating people over slowly (ie running both at the same time). | priority | determine how we migrate large instances from couchdb x to couchdb we have some large instances and it s going to be a pain to migrate to couchdb primary because it will force all long term sessions to be logged out ie all chws will have to log back in we should look into how we can get around this and if we definitely can t strategies for migrating people over slowly ie running both at the same time | 1 |
434,240 | 12,515,922,225 | IssuesEvent | 2020-06-03 08:32:21 | canonical-web-and-design/build.snapcraft.io | https://api.github.com/repos/canonical-web-and-design/build.snapcraft.io | closed | Can't change registered name when repo not configured | Priority: Medium | If the repo isn't configured with a snapcraft.yaml, you don't seem to be able to change the registered name. | 1.0 | Can't change registered name when repo not configured - If the repo isn't configured with a snapcraft.yaml, you don't seem to be able to change the registered name. | priority | can t change registered name when repo not configured if the repo isn t configured with a snapcraft yaml you don t seem to be able to change the registered name | 1 |
370,175 | 10,926,207,339 | IssuesEvent | 2019-11-22 14:15:42 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Need for a composite MBP + SG | priority: medium team: dynamics | We had several talks about this. #9665 is the most recent discussion that references this design.
The ability to easily merge MBP with SG is blocking in that many of our new IK and planning tools really need both in sync, with the ability to make changes and with simple APIs to perform both geometric queries and multibody queries. The ability to seamlessly work with a single context strongly relates to this issue.
EDIT (eric): Relates Anzu Issue 1312.
cc'ing @hongkai-dai, @avalenzu, @siyuanfeng-tri
| 1.0 | Need for a composite MBP + SG - We had several talks about this. #9665 is the most recent discussion that references this design.
The ability to easily merge MBP with SG is blocking in that many of our new IK and planning tools really need both in sync, with the ability to make changes and with simple APIs to perform both geometric queries and multibody queries. The ability to seamlessly work with a single context strongly relates to this issue.
EDIT (eric): Relates Anzu Issue 1312.
cc'ing @hongkai-dai, @avalenzu, @siyuanfeng-tri
| priority | need for a composite mbp sg we had several talks about this is the most recent discussion that references this design the ability to easily merge mbp with sg is blocking in that many of our new ik and planning tools really need both in sync with the ability to make changes and with simple apis to perform both geometric queries and multibody queries the ability to seamlessly work with a single context strongly relates to this issue edit eric relates anzu issue cc ing hongkai dai avalenzu siyuanfeng tri | 1 |
438,720 | 12,643,593,389 | IssuesEvent | 2020-06-16 10:03:51 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.0 staging-1607] Empty information in some places | Priority: Medium | It just happens
When you have created a contract
- Empty info about contract on contract board
- Empty information in the right panel side

Can be connected to #16681
| 1.0 | [0.9.0 staging-1607] Empty information in some places - It just happens
When you have created a contract
- Empty info about contract on contract board
- Empty information in the right panel side

Can be connected to #16681
| priority | empty information in some places it just happens when you have created a contract empty info about contract on contract board empty information in the right panel side can be connected to | 1 |
82,547 | 3,614,734,809 | IssuesEvent | 2016-02-06 06:53:07 | PowerPointLabs/PowerPointLabs | https://api.github.com/repos/PowerPointLabs/PowerPointLabs | opened | Update font family for all default styles | Difficulty.Easy Feature.PictureSlidesLab Priority.Medium | right now all default styles share the same font family, which may look ummmm.
it should have some kinds font families for those styles.
need to update setting in StyleOptionsFactory.
The result should be something like this

| 1.0 | Update font family for all default styles - right now all default styles share the same font family, which may look ummmm.
it should have some kinds font families for those styles.
need to update setting in StyleOptionsFactory.
The result should be something like this

| priority | update font family for all default styles right now all default styles share the same font family which may look ummmm it should have some kinds font families for those styles need to update setting in styleoptionsfactory the result should be something like this | 1 |
41,264 | 2,868,991,424 | IssuesEvent | 2015-06-05 22:25:33 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Pub get/upgrade stack trace when removing git dependency override | bug Fixed Priority-Medium | _Originally opened as dart-lang/sdk#22194_
*This issue was originally filed by jakemac53...@gmail.com*
_____
I started getting a stack trace from pub get/upgrade today, visible here https://gist.github.com/jakemac53/75c1e4740c359e766070.
This can be replicated by checking out this branch of the repo https://github.com/dart-lang/web-components/tree/htmlimport, running pub get (should work), then removing the dependency override and trying to run pub get again. | 1.0 | Pub get/upgrade stack trace when removing git dependency override - _Originally opened as dart-lang/sdk#22194_
*This issue was originally filed by jakemac53...@gmail.com*
_____
I started getting a stack trace from pub get/upgrade today, visible here https://gist.github.com/jakemac53/75c1e4740c359e766070.
This can be replicated by checking out this branch of the repo https://github.com/dart-lang/web-components/tree/htmlimport, running pub get (should work), then removing the dependency override and trying to run pub get again. | priority | pub get upgrade stack trace when removing git dependency override originally opened as dart lang sdk this issue was originally filed by gmail com i started getting a stack trace from pub get upgrade today visible here this can be replicated by checking out this branch of the repo running pub get should work then removing the dependency override and trying to run pub get again | 1 |
601,186 | 18,389,988,353 | IssuesEvent | 2021-10-12 03:24:51 | cse110-fa21-group27/cse110-fa21-group27 | https://api.github.com/repos/cse110-fa21-group27/cse110-fa21-group27 | opened | Team Video Assignment | Type: Documentation Priority: Medium Status: Accepted | Finish the Team Video Assignment. Remember it is a maximum of 2.5 minutes. | 1.0 | Team Video Assignment - Finish the Team Video Assignment. Remember it is a maximum of 2.5 minutes. | priority | team video assignment finish the team video assignment remember it is a maximum of minutes | 1 |
55,129 | 3,072,152,918 | IssuesEvent | 2015-08-19 15:36:18 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | waitForText timeout is ignored if text not found | bug imported Priority-Medium | _From [gaz...@gmail.com](https://code.google.com/u/113313170396315103068/) on September 06, 2011 01:34:39_
What steps will reproduce the problem? 1. Create a list with no string "needle".
2. call solo.waitForText("needle", 1, 2000, true)
What is the expected output?
needle wouldn't be found in searchFor call, but after some 2 seconds you'll get a not found return value
What do you see instead?
searchFor is stuck in an infinite loop, ignoring any timeout given What version of the product are you using? On what operating system? 2.5, Android 2.2 (Cyanogen 6) Please provide any additional information below. in function:
searchFor(Callable<Collection<T>> viewFetcherCallback, String regex, int expectedMinimumNumberOfMatches, boolean scroll);
There's a:
while (true) {
}
it should have an escape 'if' with a timeout.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=150_ | 1.0 | waitForText timeout is ignored if text not found - _From [gaz...@gmail.com](https://code.google.com/u/113313170396315103068/) on September 06, 2011 01:34:39_
What steps will reproduce the problem? 1. Create a list with no string "needle".
2. call solo.waitForText("needle", 1, 2000, true)
What is the expected output?
needle wouldn't be found in searchFor call, but after some 2 seconds you'll get a not found return value
What do you see instead?
searchFor is stuck in an infinite loop, ignoring any timeout given What version of the product are you using? On what operating system? 2.5, Android 2.2 (Cyanogen 6) Please provide any additional information below. in function:
searchFor(Callable<Collection<T>> viewFetcherCallback, String regex, int expectedMinimumNumberOfMatches, boolean scroll);
There's a:
while (true) {
}
it should have an escape 'if' with a timeout.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=150_ | priority | waitfortext timeout is ignored if text not found from on september what steps will reproduce the problem create a list with no string needle call solo waitfortext needle true what is the expected output needle wouldn t be found in searchfor call but after some seconds you ll get a not found return value what do you see instead searchfor is stuck in an infinite loop ignoring any timeout given what version of the product are you using on what operating system android cyanogen please provide any additional information below in function searchfor callable viewfetchercallback string regex int expectedminimumnumberofmatches boolean scroll there s a while true it should have an escape if with a timeout original issue | 1 |
203,644 | 7,068,145,180 | IssuesEvent | 2018-01-08 06:25:57 | gluster/glusterd2 | https://api.github.com/repos/gluster/glusterd2 | closed | Quorum support in GD2 | feature priority: medium | To have parity with GD1, GD2 needs to have server & client side quorum support. | 1.0 | Quorum support in GD2 - To have parity with GD1, GD2 needs to have server & client side quorum support. | priority | quorum support in to have parity with needs to have server client side quorum support | 1 |
426,740 | 12,378,825,604 | IssuesEvent | 2020-05-19 11:24:03 | threefoldtech/3bot_wallet | https://api.github.com/repos/threefoldtech/3bot_wallet | closed | Stellar Staging - Currency is shown in send even when the user doesn't have that currency, results in no From accounts to be selectable. | priority_medium type_bug | **Repro steps**
1) Have an account with no FreeTFT
2) Attempt to send FreeTFT
**Expected Result**
Option should not exist in the currency dropdown, only display currencies the user actually owns in the send transaction.
**Actual Result**

**System Info**
| 1.0 | Stellar Staging - Currency is shown in send even when the user doesn't have that currency, results in no From accounts to be selectable. - **Repro steps**
1) Have an account with no FreeTFT
2) Attempt to send FreeTFT
**Expected Result**
Option should not exist in the currency dropdown, only display currencies the user actually owns in the send transaction.
**Actual Result**

**System Info**
| priority | stellar staging currency is shown in send even when the user doesn t have that currency results in no from accounts to be selectable repro steps have an account with no freetft attempt to send freetft expected result option should not exist in the currency dropdown only display currencies the user actually owns in the send transaction actual result system info | 1 |
2,788 | 2,533,459,081 | IssuesEvent | 2015-01-23 23:36:45 | srabbelier-google/issue-export-test-3 | https://api.github.com/repos/srabbelier-google/issue-export-test-3 | closed | Migrate python SRC bindings into merged tree | Lang-Python Priority-Medium Type-Task | Original [issue 7](https://code.google.com/p/selenium/issues/detail?id=7) created by srabbelier-google on 2009-11-28T13:59:29.000Z:
The code and tests from:
selenium-rc/trunk/clients/python
should be integrated into the merged tree at:
branches/merge/selenium/{src|test}/py
The target to run them from the build should be:
rake test_selenium_py
| 1.0 | Migrate python SRC bindings into merged tree - Original [issue 7](https://code.google.com/p/selenium/issues/detail?id=7) created by srabbelier-google on 2009-11-28T13:59:29.000Z:
The code and tests from:
selenium-rc/trunk/clients/python
should be integrated into the merged tree at:
branches/merge/selenium/{src|test}/py
The target to run them from the build should be:
rake test_selenium_py
| priority | migrate python src bindings into merged tree original created by srabbelier google on the code and tests from selenium rc trunk clients python should be integrated into the merged tree at branches merge selenium src test py the target to run them from the build should be rake test selenium py | 1 |
547,349 | 16,041,565,545 | IssuesEvent | 2021-04-22 08:32:39 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | [Engines] Explore the capabilities of testsconteiners library | core effort-medium priority-medium | Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.
It could be quite useful for testing our DB access layer. | 1.0 | [Engines] Explore the capabilities of testsconteiners library - Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.
It could be quite useful for testing our DB access layer. | priority | explore the capabilities of testsconteiners library testcontainers is a java library that supports junit tests providing lightweight throwaway instances of common databases selenium web browsers or anything else that can run in a docker container it could be quite useful for testing our db access layer | 1 |
55,141 | 3,072,162,398 | IssuesEvent | 2015-08-19 15:38:47 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | How to create test cases for Android PhoneGap Application? | bug imported invalid Priority-Medium | _From [vidhya.p...@hcl.com](https://code.google.com/u/118222096880548246241/) on September 20, 2011 21:15:35_
What steps will reproduce the problem? 1.solo.EditText(0,"name")
2.In test result it showing there is no EditText 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? robotium-solo-2.4 for Android 2.2 on Windows Operating system Please provide any additional information below. I have created phonegap applciation, in which having one activity.
By using LoadUrl function in that activity, the html file get loaded.
I have created two html input box and one button.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=162_ | 1.0 | How to create test cases for Android PhoneGap Application? - _From [vidhya.p...@hcl.com](https://code.google.com/u/118222096880548246241/) on September 20, 2011 21:15:35_
What steps will reproduce the problem? 1.solo.EditText(0,"name")
2.In test result it showing there is no EditText 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? robotium-solo-2.4 for Android 2.2 on Windows Operating system Please provide any additional information below. I have created phonegap applciation, in which having one activity.
By using LoadUrl function in that activity, the html file get loaded.
I have created two html input box and one button.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=162_ | priority | how to create test cases for android phonegap application from on september what steps will reproduce the problem solo edittext name in test result it showing there is no edittext what is the expected output what do you see instead what version of the product are you using on what operating system robotium solo for android on windows operating system please provide any additional information below i have created phonegap applciation in which having one activity by using loadurl function in that activity the html file get loaded i have created two html input box and one button original issue | 1 |
680,965 | 23,291,969,532 | IssuesEvent | 2022-08-06 01:42:57 | twidi/quantifier | https://api.github.com/repos/twidi/quantifier | closed | Try to keep the current date as when navigating | Type: Bug Workflow: 8 - Done Priority: 3 - Medium Scope: Core Scope: Interface Status: Confirmed Complexity: 2 - Medium | If we hare the 2022-08-05 and are in monthly mode, all links will have the date set to 2022-08-01, so the context is lost for example if we want to go in daily mode | 1.0 | Try to keep the current date as when navigating - If we hare the 2022-08-05 and are in monthly mode, all links will have the date set to 2022-08-01, so the context is lost for example if we want to go in daily mode | priority | try to keep the current date as when navigating if we hare the and are in monthly mode all links will have the date set to so the context is lost for example if we want to go in daily mode | 1 |
419,256 | 12,219,628,233 | IssuesEvent | 2020-05-01 22:16:46 | codidact/qpixel | https://api.github.com/repos/codidact/qpixel | closed | Per-category access restriction | area: backend priority: medium type: change request | A user on Writing Meta suggested a category where people could post their work, either for critique or just to share. A concern is that publishing something, even informally, can impede or even prevent selling that work to a publisher later -- some publishers won't buy what was previously publicly available for free, even if that was an earlier or unfinished draft. The thinking is that restricting read access to the category to signed-in users would suffice. See https://writing.codidact.com/questions/74774#answer-74810.
In thinking about this use case, I realized there's another use case for access-restricted stuff: moderator-only content. On my sites we sometimes wanted to have a place to stash things like templates for our own mod messages, notes about ongoing investigations, and suchlike, and mod chat on SE was terrible for that (and we're not planning chat, let alone private chat, for MVP).
Note that there are two factors here: visibility of the *content* (can be gated by trust level), and visibility of the *category*, i.e. even seeing that it exists. For the moderator case there'd be no reason for non-mods to see a tab (and entry in the category list) that they can't access. For the Writing case that's less clear; maybe we want a tab that, if you click on when not signed in, gives you a message about how to access that content. Or maybe it should stay hidden. We need to think more about that part.
| 1.0 | Per-category access restriction - A user on Writing Meta suggested a category where people could post their work, either for critique or just to share. A concern is that publishing something, even informally, can impede or even prevent selling that work to a publisher later -- some publishers won't buy what was previously publicly available for free, even if that was an earlier or unfinished draft. The thinking is that restricting read access to the category to signed-in users would suffice. See https://writing.codidact.com/questions/74774#answer-74810.
In thinking about this use case, I realized there's another use case for access-restricted stuff: moderator-only content. On my sites we sometimes wanted to have a place to stash things like templates for our own mod messages, notes about ongoing investigations, and suchlike, and mod chat on SE was terrible for that (and we're not planning chat, let alone private chat, for MVP).
Note that there are two factors here: visibility of the *content* (can be gated by trust level), and visibility of the *category*, i.e. even seeing that it exists. For the moderator case there'd be no reason for non-mods to see a tab (and entry in the category list) that they can't access. For the Writing case that's less clear; maybe we want a tab that, if you click on when not signed in, gives you a message about how to access that content. Or maybe it should stay hidden. We need to think more about that part.
| priority | per category access restriction a user on writing meta suggested a category where people could post their work either for critique or just to share a concern is that publishing something even informally can impede or even prevent selling that work to a publisher later some publishers won t buy what was previously publicly available for free even if that was an earlier or unfinished draft the thinking is that restricting read access to the category to signed in users would suffice see in thinking about this use case i realized there s another use case for access restricted stuff moderator only content on my sites we sometimes wanted to have a place to stash things like templates for our own mod messages notes about ongoing investigations and suchlike and mod chat on se was terrible for that and we re not planning chat let alone private chat for mvp note that there are two factors here visibility of the content can be gated by trust level and visibility of the category i e even seeing that it exists for the moderator case there d be no reason for non mods to see a tab and entry in the category list that they can t access for the writing case that s less clear maybe we want a tab that if you click on when not signed in gives you a message about how to access that content or maybe it should stay hidden we need to think more about that part | 1 |
553,781 | 16,381,908,528 | IssuesEvent | 2021-05-17 05:03:35 | clabe45/vidar | https://api.github.com/repos/clabe45/vidar | opened | Audio fade in effect | priority:medium type:enhancement | Add an audio effect that makes the target fade in from silence. It should have a duration property that controls how many seconds the effect should last after the start of the layer.
Note that this is an effect, not a transition, because it only requires one layer. | 1.0 | Audio fade in effect - Add an audio effect that makes the target fade in from silence. It should have a duration property that controls how many seconds the effect should last after the start of the layer.
Note that this is an effect, not a transition, because it only requires one layer. | priority | audio fade in effect add an audio effect that makes the target fade in from silence it should have a duration property that controls how many seconds the effect should last after the start of the layer note that this is an effect not a transition because it only requires one layer | 1 |
34,757 | 2,787,286,123 | IssuesEvent | 2015-05-08 03:47:35 | punongbayan-araullo/tickets | https://api.github.com/repos/punongbayan-araullo/tickets | opened | Remove Jeng Duque as recipient of "Document Accountability Accepted" notification | other priority - medium status - accepted system - archives | Remove Jeng Duque as recipient of "Document Accountability Accepted" notification | 1.0 | Remove Jeng Duque as recipient of "Document Accountability Accepted" notification - Remove Jeng Duque as recipient of "Document Accountability Accepted" notification | priority | remove jeng duque as recipient of document accountability accepted notification remove jeng duque as recipient of document accountability accepted notification | 1 |
144,840 | 5,546,695,985 | IssuesEvent | 2017-03-23 02:03:41 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Update the documentation according to #123 | bug medium priority | Issue #123 changed the behavior of "to" parameter in random_, the [documentation](http://pytorch.org/docs/tensors.html?highlight=random#torch.Tensor.random_) should be updated accordingly.
Currently it says "distribution over [from, to]" which mean random_(4) returns numbers among 0,1,2,3,4. | 1.0 | Update the documentation according to #123 - Issue #123 changed the behavior of "to" parameter in random_, the [documentation](http://pytorch.org/docs/tensors.html?highlight=random#torch.Tensor.random_) should be updated accordingly.
Currently it says "distribution over [from, to]" which mean random_(4) returns numbers among 0,1,2,3,4. | priority | update the documentation according to issue changed the behavior of to parameter in random the should be updated accordingly currently it says distribution over which mean random returns numbers among | 1 |
492,664 | 14,217,253,946 | IssuesEvent | 2020-11-17 10:06:05 | bounswe/bounswe2020group4 | https://api.github.com/repos/bounswe/bounswe2020group4 | closed | (BKND) Finalize the Project Plan | Backend Effort: Medium Priority: High Status: Completed Task: Assignment | Backend team will update the Project Plan to shape it into its final form.
Deadline: 19/11/20 | 1.0 | (BKND) Finalize the Project Plan - Backend team will update the Project Plan to shape it into its final form.
Deadline: 19/11/20 | priority | bknd finalize the project plan backend team will update the project plan to shape it into its final form deadline | 1 |
165,953 | 6,288,626,331 | IssuesEvent | 2017-07-19 17:24:24 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | After upgrade: 'command=send' instead of 'Send transaction: x ETH' is shown in Chats [send] | bug medium-priority | ### Description
*Type*: Bug
*Summary*: After upgrade: if send transaction was done before upgrade then it's shown as 'command=send' in chats

#### Expected behavior
'Send transaction: x ETH' is shown in Chats
#### Actual behavior
'command=send' in chats
### Reproduction
- Open Status installed from PlayStore v 0.9.9.
- Send some eth in 1-1 chat, so the last messages in Chats is "Send ETH: x ETH"
- upgrade from 0.9.9 to `send` PR build
- open Chats
### Additional Information
* Status version: 0.9.9
* Operating System: iOS and Android | 1.0 | After upgrade: 'command=send' instead of 'Send transaction: x ETH' is shown in Chats [send] - ### Description
*Type*: Bug
*Summary*: After upgrade: if send transaction was done before upgrade then it's shown as 'command=send' in chats

#### Expected behavior
'Send transaction: x ETH' is shown in Chats
#### Actual behavior
'command=send' in chats
### Reproduction
- Open Status installed from PlayStore v 0.9.9.
- Send some eth in 1-1 chat, so the last messages in Chats is "Send ETH: x ETH"
- upgrade from 0.9.9 to `send` PR build
- open Chats
### Additional Information
* Status version: 0.9.9
* Operating System: iOS and Android | priority | after upgrade command send instead of send transaction x eth is shown in chats description type bug summary after upgrade if send transaction was done before upgrade then it s shown as command send in chats expected behavior send transaction x eth is shown in chats actual behavior command send in chats reproduction open status installed from playstore v send some eth in chat so the last messages in chats is send eth x eth upgrade from to send pr build open chats additional information status version operating system ios and android | 1 |
275,353 | 8,575,594,343 | IssuesEvent | 2018-11-12 17:43:16 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | Update Spheral reader to handle double-precision coordinates. | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal | Will Schill has a need for VisIt to support double-precision coordinates for his Spheral Data.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1130
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Update Spheral reader to handle double-precision coordinates.
Assigned to: Kathleen Biagas
Category:
Target version: 2.5.2
Author: Kathleen Biagas
Start: 07/12/2012
Due date:
% Done: 0
Estimated time:
Created: 07/12/2012 12:35 pm
Updated: 07/13/2012 01:22 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Will Schill has a need for VisIt to support double-precision coordinates for his Spheral Data.
Comments:
Change internal storage to double, create vtkPoints and vtkDataArrays with type VTK_DOUBLE.SVN Revision 18733 (2.5RC), 18755 (trunk)
| 1.0 | Update Spheral reader to handle double-precision coordinates. - Will Schill has a need for VisIt to support double-precision coordinates for his Spheral Data.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1130
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Update Spheral reader to handle double-precision coordinates.
Assigned to: Kathleen Biagas
Category:
Target version: 2.5.2
Author: Kathleen Biagas
Start: 07/12/2012
Due date:
% Done: 0
Estimated time:
Created: 07/12/2012 12:35 pm
Updated: 07/13/2012 01:22 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Will Schill has a need for VisIt to support double-precision coordinates for his Spheral Data.
Comments:
Change internal storage to double, create vtkPoints and vtkDataArrays with type VTK_DOUBLE.SVN Revision 18733 (2.5RC), 18755 (trunk)
| priority | update spheral reader to handle double precision coordinates will schill has a need for visit to support double precision coordinates for his spheral data redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority normal subject update spheral reader to handle double precision coordinates assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group any description will schill has a need for visit to support double precision coordinates for his spheral data comments change internal storage to double create vtkpoints and vtkdataarrays with type vtk double svn revision trunk | 1 |
622,919 | 19,658,819,298 | IssuesEvent | 2022-01-10 15:07:17 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | BuddyBoss duplicate component page notice show a broken link | bug priority-medium t3-ready-for-qa Stale | **Describe the bug**
If I add duplicate page for BuddyBoss component page from BuddyBoss > Pages then the following message show at the top: (Each BuddyBoss Component needs its own WordPress page. The following WordPress Pages have more than one component associated with them: . Repair)
where Repair text is linked with a broken/invalid link.
Please refer to this screenshot: [https://prnt.sc/10bgxlh](https://prnt.sc/10bgxlh)
**To Reproduce**
Steps to reproduce the behavior:
Please assign some duplicate page for BuddyBoss component pages then you will find the message with broken link in any other admin pages.
**Support ticket links**
https://secure.helpscout.net/conversation/1441590842/128464?folderId=4314898
**Jira issue** : [PROD-849]
[PROD-849]: https://buddyboss.atlassian.net/browse/PROD-849?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | BuddyBoss duplicate component page notice show a broken link - **Describe the bug**
If I add duplicate page for BuddyBoss component page from BuddyBoss > Pages then the following message show at the top: (Each BuddyBoss Component needs its own WordPress page. The following WordPress Pages have more than one component associated with them: . Repair)
where Repair text is linked with a broken/invalid link.
Please refer to this screenshot: [https://prnt.sc/10bgxlh](https://prnt.sc/10bgxlh)
**To Reproduce**
Steps to reproduce the behavior:
Please assign some duplicate page for BuddyBoss component pages then you will find the message with broken link in any other admin pages.
**Support ticket links**
https://secure.helpscout.net/conversation/1441590842/128464?folderId=4314898
**Jira issue** : [PROD-849]
[PROD-849]: https://buddyboss.atlassian.net/browse/PROD-849?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | buddyboss duplicate component page notice show a broken link describe the bug if i add duplicate page for buddyboss component page from buddyboss pages then the following message show at the top each buddyboss component needs its own wordpress page the following wordpress pages have more than one component associated with them repair where repair text is linked with a broken invalid link please refer to this screenshot to reproduce steps to reproduce the behavior please assign some duplicate page for buddyboss component pages then you will find the message with broken link in any other admin pages support ticket links jira issue | 1 |
234,007 | 7,714,730,410 | IssuesEvent | 2018-05-23 03:52:46 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | Stuck in "Processing..." state | Bug Priority: Medium | Looks like it is an intermittent bug. Just to keep this on the radar. | 1.0 | Stuck in "Processing..." state - Looks like it is an intermittent bug. Just to keep this on the radar. | priority | stuck in processing state looks like it is an intermittent bug just to keep this on the radar | 1 |
258,564 | 8,177,498,857 | IssuesEvent | 2018-08-28 10:55:23 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | closed | Declaration dublicate PROD (j239) | kind/support priority/medium | Декларація 5470beb0-1134-4948-99f3-49b0098eeee2 була заключена 2017-10-24 та не має номеру, тільки declaration_id.
2018-08-02 відбулася спроба перезаключення даної декларації, що закінчилась створенням дублікату (declaration_id: 08ef1bac-35dd-437a-b5d4-9052b70e1394 , declaration_number: 0000-4E6E-4641).
Не зрозуміло чому так сталось, адже обидві декларації мають однакові дані про персону, тому декларація повинна була перезаключитися (а попередня декларація на цього пацієнта: 5470beb0-1134-4948-99f3-49b0098eeee2 — мала розірватися).
Просимо допомогти у вирішенні проблеми. | 1.0 | Declaration dublicate PROD (j239) - Декларація 5470beb0-1134-4948-99f3-49b0098eeee2 була заключена 2017-10-24 та не має номеру, тільки declaration_id.
2018-08-02 відбулася спроба перезаключення даної декларації, що закінчилась створенням дублікату (declaration_id: 08ef1bac-35dd-437a-b5d4-9052b70e1394 , declaration_number: 0000-4E6E-4641).
Не зрозуміло чому так сталось, адже обидві декларації мають однакові дані про персону, тому декларація повинна була перезаключитися (а попередня декларація на цього пацієнта: 5470beb0-1134-4948-99f3-49b0098eeee2 — мала розірватися).
Просимо допомогти у вирішенні проблеми. | priority | declaration dublicate prod декларація була заключена та не має номеру тільки declaration id відбулася спроба перезаключення даної декларації що закінчилась створенням дублікату declaration id declaration number не зрозуміло чому так сталось адже обидві декларації мають однакові дані про персону тому декларація повинна була перезаключитися а попередня декларація на цього пацієнта — мала розірватися просимо допомогти у вирішенні проблеми | 1 |
252,275 | 8,034,001,286 | IssuesEvent | 2018-07-29 13:41:05 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | Consider FastUtil instead of Trove | > Enhancement Affects Datafiles Affects Performance Concerns Data Persistence Concerns GAML OS All Priority Medium Version Git | **Is your request related to a problem? Please describe.**
The Trove implementation (https://bitbucket.org/trove4j/trove) we are using for supporting a lot of functionalities (incl. GamaMap) begins to show its age, especially regarding the efficient support of streams. Moreover, it seems it has been superseded in terms of performance by newcomers, especially FastUtil (http://java-performance.info/hashmap-overview-jdk-fastutil-goldman-sachs-hppc-koloboke-trove-january-2015/), which has released a version not so long ago (http://fastutil.di.unimi.it).
Trove also sometimes faces difficulties in handling concurrent accesses, which seem to be much more elegantly solved with FastUtil
**Describe the solution you'd like**
We might want to progressively move to FastUtil in order to optimise maps, notably, at the beginning, for everything related to the compilation of GAML files (which uses zillions of maps all over the process).
**Describe alternatives you've considered**
The only drawback of FastUtil is that the current (home-made) implementation of TOrderedHashMap would need to be rewritten from scratch (it is the basis of GamaMap), but the good news is that FastUtil includes several implementations of "ordered maps" (including efficient TreeMaps).
| 1.0 | Consider FastUtil instead of Trove - **Is your request related to a problem? Please describe.**
The Trove implementation (https://bitbucket.org/trove4j/trove) we are using for supporting a lot of functionalities (incl. GamaMap) begins to show its age, especially regarding the efficient support of streams. Moreover, it seems it has been superseded in terms of performance by newcomers, especially FastUtil (http://java-performance.info/hashmap-overview-jdk-fastutil-goldman-sachs-hppc-koloboke-trove-january-2015/), which has released a version not so long ago (http://fastutil.di.unimi.it).
Trove also sometimes faces difficulties in handling concurrent accesses, which seem to be much more elegantly solved with FastUtil
**Describe the solution you'd like**
We might want to progressively move to FastUtil in order to optimise maps, notably, at the beginning, for everything related to the compilation of GAML files (which uses zillions of maps all over the process).
**Describe alternatives you've considered**
The only drawback of FastUtil is that the current (home-made) implementation of TOrderedHashMap would need to be rewritten from scratch (it is the basis of GamaMap), but the good news is that FastUtil includes several implementations of "ordered maps" (including efficient TreeMaps).
| priority | consider fastutil instead of trove is your request related to a problem please describe the trove implementation we are using for supporting a lot of functionalities incl gamamap begins to show its age especially regarding the efficient support of streams moreover it seems it has been superseded in terms of performance by newcomers especially fastutil which has released a version not so long ago trove also sometimes faces difficulties in handling concurrent accesses which seem to be much more elegantly solved with fastutil describe the solution you d like we might want to progressively move to fastutil in order to optimise maps notably at the beginning for everything related to the compilation of gaml files which uses zillions of maps all over the process describe alternatives you ve considered the only drawback of fastutil is that the current home made implementation of torderedhashmap would need to be rewritten from scratch it is the basis of gamamap but the good news is that fastutil includes several implementations of ordered maps including efficient treemaps | 1 |
212,414 | 7,236,840,387 | IssuesEvent | 2018-02-13 08:56:30 | EmanueleC/WirelessNetworkNotes | https://api.github.com/repos/EmanueleC/WirelessNetworkNotes | closed | "Loss recovery" subsection needs to be deleted | bug priority:medium | **Description**
This subsection (in TCP) is useless because it's a repetition of the content above it. | 1.0 | "Loss recovery" subsection needs to be deleted - **Description**
This subsection (in TCP) is useless because it's a repetition of the content above it. | priority | loss recovery subsection needs to be deleted description this subsection in tcp is useless because it s a repetition of the content above it | 1 |
440,674 | 12,702,628,567 | IssuesEvent | 2020-06-22 20:34:01 | IngenioUN/back_end | https://api.github.com/repos/IngenioUN/back_end | closed | Historia #20 - Ver seguidos | Cap: Back-End Priority: Medium Type: New | - Ver las personas que sigue el usuario logueado
- Ver las personas que sigue otro usuario | 1.0 | Historia #20 - Ver seguidos - - Ver las personas que sigue el usuario logueado
- Ver las personas que sigue otro usuario | priority | historia ver seguidos ver las personas que sigue el usuario logueado ver las personas que sigue otro usuario | 1 |
796,687 | 28,124,122,873 | IssuesEvent | 2023-03-31 16:13:02 | Esri/arcgis-maps-sdk-swift-toolkit | https://api.github.com/repos/Esri/arcgis-maps-sdk-swift-toolkit | closed | [Floor Filter Component] Exposing sites, facilities and levels read-only info. | Priority - medium | Currently there is no way of getting sites, facilities and levels info from floor filter, especially the selected site, selected facility and selected level info. Consuming apps will need this info to keep a track of user's location if the app deals with location related functionality.
Looking at the current floor filter toolkit code, looks like floor filter view model has all the selection details, but the properties are private. These details can be read-only when exposed.
Please allow access to the details of the sites, facilities and levels in the map especially selected site, selected facility and selected level since other details can also be obtained from floor manager.
@dfeinzimer - FYI.
| 1.0 | [Floor Filter Component] Exposing sites, facilities and levels read-only info. - Currently there is no way of getting sites, facilities and levels info from floor filter, especially the selected site, selected facility and selected level info. Consuming apps will need this info to keep a track of user's location if the app deals with location related functionality.
Looking at the current floor filter toolkit code, looks like floor filter view model has all the selection details, but the properties are private. These details can be read-only when exposed.
Please allow access to the details of the sites, facilities and levels in the map especially selected site, selected facility and selected level since other details can also be obtained from floor manager.
@dfeinzimer - FYI.
| priority | exposing sites facilities and levels read only info currently there is no way of getting sites facilities and levels info from floor filter especially the selected site selected facility and selected level info consuming apps will need this info to keep a track of user s location if the app deals with location related functionality looking at the current floor filter toolkit code looks like floor filter view model has all the selection details but the properties are private these details can be read only when exposed please allow access to the details of the sites facilities and levels in the map especially selected site selected facility and selected level since other details can also be obtained from floor manager dfeinzimer fyi | 1 |
445,621 | 12,833,850,504 | IssuesEvent | 2020-07-07 10:00:03 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | reopened | Client failing to auto-reconnect after server restart | Category: Tech Priority: Medium Status: Investigate Week Task | This used to work: client would detect disconnected server has returned, it would disconnect and reconnect.
Now it gets stuck here:

Medium pri because this is important for development iteration | 1.0 | Client failing to auto-reconnect after server restart - This used to work: client would detect disconnected server has returned, it would disconnect and reconnect.
Now it gets stuck here:

Medium pri because this is important for development iteration | priority | client failing to auto reconnect after server restart this used to work client would detect disconnected server has returned it would disconnect and reconnect now it gets stuck here medium pri because this is important for development iteration | 1 |
441,421 | 12,717,577,210 | IssuesEvent | 2020-06-24 05:34:14 | stephanosio/zephyr-crosstool-ng | https://api.github.com/repos/stephanosio/zephyr-crosstool-ng | closed | Implement Snap packages | RFC distribution priority: medium | Implement Snap packages
Note:
* Snap packages shall target the `core18` base image, which is derived from the Ubuntu 18.04 LTS. This provides an Ubuntu 18.04-compatible library system on all Linux distros and effectively allows the packaged executables to run in a known environment (just like Poky in the `meta-zephyr-sdk` does).
* See https://snapcraft.io/docs/pre-built-apps for building Snap packages from the pre-built archives. | 1.0 | Implement Snap packages - Implement Snap packages
Note:
* Snap packages shall target the `core18` base image, which is derived from the Ubuntu 18.04 LTS. This provides an Ubuntu 18.04-compatible library system on all Linux distros and effectively allows the packaged executables to run in a known environment (just like Poky in the `meta-zephyr-sdk` does).
* See https://snapcraft.io/docs/pre-built-apps for building Snap packages from the pre-built archives. | priority | implement snap packages implement snap packages note snap packages shall target the base image which is derived from the ubuntu lts this provides an ubuntu compatible library system on all linux distros and effectively allows the packaged executables to run in a known environment just like poky in the meta zephyr sdk does see for building snap packages from the pre built archives | 1 |
69,894 | 3,316,293,629 | IssuesEvent | 2015-11-06 16:18:39 | TeselaGen/ve | https://api.github.com/repos/TeselaGen/ve | closed | User Manager - Restrict usernames to valid Dow usernames | Customer: DAS Phase I Priority: Medium Type: Enhancement | When creating a new user using the user manager tab in the admin console, it would be nice to have it query LDAP and confirm that the username exists. | 1.0 | User Manager - Restrict usernames to valid Dow usernames - When creating a new user using the user manager tab in the admin console, it would be nice to have it query LDAP and confirm that the username exists. | priority | user manager restrict usernames to valid dow usernames when creating a new user using the user manager tab in the admin console it would be nice to have it query ldap and confirm that the username exists | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.