Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
337,916 | 10,221,001,085 | IssuesEvent | 2019-08-15 23:27:42 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | IRS Tables only showing 20 rows | Priority: High bug has pr | For some reason the tables in IRS plan list page, IRS Plan Jurisdiction selection page, and IRS Finalized Plan page only render 20 rows of data. | 1.0 | IRS Tables only showing 20 rows - For some reason the tables in IRS plan list page, IRS Plan Jurisdiction selection page, and IRS Finalized Plan page only render 20 rows of data. | priority | irs tables only showing rows for some reason the tables in irs plan list page irs plan jurisdiction selection page and irs finalized plan page only render rows of data | 1 |
118,267 | 4,733,613,020 | IssuesEvent | 2016-10-19 11:48:39 | armadito/armadito-glpi | https://api.github.com/repos/armadito/armadito-glpi | closed | Add massive action for adding a new Job for a selected computer | feature high priority | For example, create a new scan for selected computers.<br>Check if selected computer is well associated with an armadito-agent id | 1.0 | Add massive action for adding a new Job for a selected computer - For example, create a new scan for selected computers.<br>Check if selected computer is well associated with an armadito-agent id | priority | add massive action for adding a new job for a selected computer for example create a new scan for selected computers check if selected computer is well associated with an armadito agent id | 1 |
207,984 | 7,134,969,491 | IssuesEvent | 2018-01-22 22:47:18 | noavish/24event | https://api.github.com/repos/noavish/24event | opened | Create routes | high-priority server-side | - [ ] main page<br>- [ ] add event<br>- [ ] delete event<br>- [ ] join event<br>- [ ] search event | 1.0 | Create routes - - [ ] main page<br>- [ ] add event<br>- [ ] delete event<br>- [ ] join event<br>- [ ] search event | priority | create routes main page add event delete event join event search event | 1 |
504,828 | 14,622,568,602 | IssuesEvent | 2020-12-23 00:40:05 | BenJeau/react-native-draw | https://api.github.com/repos/BenJeau/react-native-draw | closed | Be able to draw dots | bug priority: high | I think this is related to my implementation of creating the SVG path strings, in the following function<br>https://github.com/BenJeau/react-native-draw/blob/f87103fc126f2773557c3f6d577ef8463b0316c8/src/utils/svg.ts#L3-L10<br>I could either<br>- modify this function to cover edge cases<br>- implement #2 with a way that takes this in mind | 1.0 | Be able to draw dots - I think this is related to my implementation of creating the SVG path strings, in the following function<br>https://github.com/BenJeau/react-native-draw/blob/f87103fc126f2773557c3f6d577ef8463b0316c8/src/utils/svg.ts#L3-L10<br>I could either<br>- modify this function to cover edge cases<br>- implement #2 with a way that takes this in mind | priority | be able to draw dots i think this is related to my implementation of creating the svg path strings in the following function i could either modify this function to cover edge cases implement with a way that takes this in mind | 1 |
66,655 | 3,256,934,564 | IssuesEvent | 2015-10-20 15:44:30 | ceylon/ceylon.language | https://api.github.com/repos/ceylon/ceylon.language | closed | confusing error message for parseFloat() failures | BUG high priority | `parseFloat(2.7182818284590452354)` on the JVM results in:<br>```<br>ceylon compile-dart: 8736074210880900738 cannot be coerced into a 64 bit floating point value<br>ceylon.language.OverflowException "8736074210880900738 cannot be coerced into a 64 bit floating point value"<br>at ceylon.language.Integer.getFloat(Integer.java:442)<br>at ceylon.language.parseFloat_.parseFloat(parseFloat.ceylon:86)<br>```<br>which complicates troubleshooting (trying to find source data that aligns with `8736074210880900738`). I'm guessing this is an unintended consequence of https://github.com/ceylon/ceylon.language/pull/756.<br>cc @ePaul | 1.0 | confusing error message for parseFloat() failures - `parseFloat(2.7182818284590452354)` on the JVM results in:<br>```<br>ceylon compile-dart: 8736074210880900738 cannot be coerced into a 64 bit floating point value<br>ceylon.language.OverflowException "8736074210880900738 cannot be coerced into a 64 bit floating point value"<br>at ceylon.language.Integer.getFloat(Integer.java:442)<br>at ceylon.language.parseFloat_.parseFloat(parseFloat.ceylon:86)<br>```<br>which complicates troubleshooting (trying to find source data that aligns with `8736074210880900738`). I'm guessing this is an unintended consequence of https://github.com/ceylon/ceylon.language/pull/756.<br>cc @ePaul | priority | confusing error message for parsefloat failures parsefloat on the jvm results in ceylon compile dart cannot be coerced into a bit floating point value ceylon language overflowexception cannot be coerced into a bit floating point value at ceylon language integer getfloat integer java at ceylon language parsefloat parsefloat parsefloat ceylon which complicates troubleshooting trying to find source data that aligns with i m guessing this is an unintended consequence of cc epaul | 1 |
181,369 | 6,659,063,332 | IssuesEvent | 2017-10-01 05:23:05 | WazeDev/WME-Place-Harmonizer | https://api.github.com/repos/WazeDev/WME-Place-Harmonizer | closed | Ignore duplicate matching names inside parentheses | Bug: Mild Priority: High | For instance... **Starbucks (inside Target)** should not flag a duplicate if a Target is nearby. | 1.0 | Ignore duplicate matching names inside parentheses - For instance... **Starbucks (inside Target)** should not flag a duplicate if a Target is nearby. | priority | ignore duplicate matching names inside parentheses for instance starbucks inside target should not flag a duplicate if a target is nearby | 1 |
689,373 | 23,618,180,980 | IssuesEvent | 2022-08-24 17:48:57 | ATTPC/ATTPCROOTv2 | https://api.github.com/repos/ATTPC/ATTPCROOTv2 | opened | Covariance matrix of initial cluster | high priority | The initial track cluster used for fitting has an undefined covariance matrix. This might be affecting the fitting perfomance. | 1.0 | Covariance matrix of initial cluster - The initial track cluster used for fitting has an undefined covariance matrix. This might be affecting the fitting perfomance. | priority | covariance matrix of initial cluster the initial track cluster used for fitting has an undefined covariance matrix this might be affecting the fitting perfomance | 1 |
107,405 | 4,307,979,440 | IssuesEvent | 2016-07-21 11:07:41 | laurencedawson/reddit-sync-development | https://api.github.com/repos/laurencedawson/reddit-sync-development | closed | Load more than 9 child comments | bug High priority | [https://www.reddit.com/r/circlejerk/comments/2nuwma/an_ice_cube_after_15_minutes_on_my_keychain/cmh55tf](https://www.reddit.com/r/circlejerk/comments/2nuwma/an_ice_cube_after_15_minutes_on_my_keychain/cmh55tf)<br>Clicking on load more in the above thread do not bring up any further child comments | 1.0 | Load more than 9 child comments - [https://www.reddit.com/r/circlejerk/comments/2nuwma/an_ice_cube_after_15_minutes_on_my_keychain/cmh55tf](https://www.reddit.com/r/circlejerk/comments/2nuwma/an_ice_cube_after_15_minutes_on_my_keychain/cmh55tf)<br>Clicking on load more in the above thread do not bring up any further child comments | priority | load more than child comments clicking on load more in the above thread do not bring up any further child comments | 1 |
565,897 | 16,772,289,198 | IssuesEvent | 2021-06-14 16:08:22 | tomvothecoder/xcdat | https://api.github.com/repos/tomvothecoder/xcdat | opened | Analyze regridding requirements and third-party libraries | Priority: High | This will help us determine the complexity and time estimate for [horizontal](#44) and [vertical](#45) regridding.<br>1) Analyze our regridding requirements<br>2) Determine if existing third-party libraries are sufficient in meeting requirements<br>a. If sufficient, should we include or not include library in dependencies? Should users just install them separately in their environment?<br>b. If not sufficient, should we extend library or write regridder from scratch? | 1.0 | Analyze regridding requirements and third-party libraries - This will help us determine the complexity and time estimate for [horizontal](#44) and [vertical](#45) regridding.<br>1) Analyze our regridding requirements<br>2) Determine if existing third-party libraries are sufficient in meeting requirements<br>a. If sufficient, should we include or not include library in dependencies? Should users just install them separately in their environment?<br>b. If not sufficient, should we extend library or write regridder from scratch? | priority | analyze regridding requirements and third party libraries this will help us determine the complexity and time estimate for and regridding analyze our regridding requirements determine if existing third party libraries are sufficient in meeting requirements a if sufficient should we include or not include library in dependencies should users just install them separately in their environment b if not sufficient should we extend library or write regridder from scratch | 1 |
533,628 | 15,595,383,873 | IssuesEvent | 2021-03-18 14:50:24 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | BCC Data not Showing on Thailand Production Monitor Page | Priority: High | The plan `A1 เขาแก้ว (2207010601) 2021-02-25` ID: `7dbf2406-1191-5c5b-80d5-da2328fe3f33` has data submitted on the android device and marked as synced to the server. Checks on the OpenSRP server confirms that that data has synced. However, the plan does not show the BCC data on the WebUI under the Monitor page [here](https://mhealth.ddc.moph.go.th/focus-investigation/map/301d72b4-0946-50ea-8897-3cf4ff0acea6).<br>The same issue applied to the plan `A1 ผักกาด (2204100301) 2021-02-27` ID: `5ba50d6b-4687-5a5a-84a8-3cd846340eed` accessible [here](https://mhealth.ddc.moph.go.th/focus-investigation/map/1a9ccc6b-38e3-5b03-82ec-2172fc1e3ced). | 1.0 | BCC Data not Showing on Thailand Production Monitor Page - The plan `A1 เขาแก้ว (2207010601) 2021-02-25` ID: `7dbf2406-1191-5c5b-80d5-da2328fe3f33` has data submitted on the android device and marked as synced to the server. Checks on the OpenSRP server confirms that that data has synced. However, the plan does not show the BCC data on the WebUI under the Monitor page [here](https://mhealth.ddc.moph.go.th/focus-investigation/map/301d72b4-0946-50ea-8897-3cf4ff0acea6).<br>The same issue applied to the plan `A1 ผักกาด (2204100301) 2021-02-27` ID: `5ba50d6b-4687-5a5a-84a8-3cd846340eed` accessible [here](https://mhealth.ddc.moph.go.th/focus-investigation/map/1a9ccc6b-38e3-5b03-82ec-2172fc1e3ced). | priority | bcc data not showing on thailand production monitor page the plan เขาแก้ว id has data submitted on the android device and marked as synced to the server checks on the opensrp server confirms that that data has synced however the plan does not show the bcc data on the webui under the monitor page the same issue applied to the plan ผักกาด id accessible | 1 |
283,601 | 8,721,128,809 | IssuesEvent | 2018-12-08 19:44:49 | SunwellWoW/Sunwell-TBC-Bugtracker | https://api.github.com/repos/SunwellWoW/Sunwell-TBC-Bugtracker | closed | Quest - City of Light | High Priority duplicate | Quest still wont complete, tried twice now... both times the npc despawns randomly right after the Scryers section. | 1.0 | Quest - City of Light - Quest still wont complete, tried twice now... both times the npc despawns randomly right after the Scryers section. | priority | quest city of light quest still wont complete tried twice now both times the npc despawns randomly right after the scryers section | 1 |
163,557 | 6,200,778,594 | IssuesEvent | 2017-07-06 02:43:11 | chrisblakley/Nebula | https://api.github.com/repos/chrisblakley/Nebula | closed | Theme update email version numbers are backwards | Backend (Server) Bug High Priority Parent / Child Theme | The theme update email version number is backwards for some reason...<br>```<br>The parent Nebula theme has been updated from version 5.0.27 (Committed: May 27, 2017) to 5.0.24 for...<br>``` | 1.0 | Theme update email version numbers are backwards - The theme update email version number is backwards for some reason...<br>```<br>The parent Nebula theme has been updated from version 5.0.27 (Committed: May 27, 2017) to 5.0.24 for...<br>``` | priority | theme update email version numbers are backwards the theme update email version number is backwards for some reason the parent nebula theme has been updated from version committed may to for | 1 |
550,225 | 16,107,256,395 | IssuesEvent | 2021-04-27 16:19:05 | TEAM-SUITS/Suits | https://api.github.com/repos/TEAM-SUITS/Suits | closed | feat: 답변에 대한 좋아요 기능 구현 | :orange_circle: Priority: High :sparkling_heart: feature | # ✨Feature Request<br>## 답변에 대한 좋아요 기능 구현<br>- 답변 좋아요 클릭 시 좋아요 상태를 업데이트하는 기능 구현 필요 | 1.0 | feat: 답변에 대한 좋아요 기능 구현 - # ✨Feature Request<br>## 답변에 대한 좋아요 기능 구현<br>- 답변 좋아요 클릭 시 좋아요 상태를 업데이트하는 기능 구현 필요 | priority | feat 답변에 대한 좋아요 기능 구현 ✨feature request 답변에 대한 좋아요 기능 구현 답변 좋아요 클릭 시 좋아요 상태를 업데이트하는 기능 구현 필요 | 1 |
97,551 | 3,995,592,930 | IssuesEvent | 2016-05-10 15:58:09 | WikiEducationFoundation/WikiEduDashboard | https://api.github.com/repos/WikiEducationFoundation/WikiEduDashboard | closed | dashboard.wikiedu.org interactions with wikimedia API is extremely slow | high-importance bug top priority WINTR | As of this morning, it takes longer than 30 seconds to do queries against the MediaWiki. This means that login and editing actions on the dashboard time out.<br>I'm able to get data back *eventually* when making queries to Commons or Wikipedia on the rails console, but it takes a very long time. Staging and other instances are not affected, just dashboard.wikiedu.org. | 1.0 | dashboard.wikiedu.org interactions with wikimedia API is extremely slow - As of this morning, it takes longer than 30 seconds to do queries against the MediaWiki. This means that login and editing actions on the dashboard time out.<br>I'm able to get data back *eventually* when making queries to Commons or Wikipedia on the rails console, but it takes a very long time. Staging and other instances are not affected, just dashboard.wikiedu.org. | priority | dashboard wikiedu org interactions with wikimedia api is extremely slow as of this morning it takes longer than seconds to do queries against the mediawiki this means that login and editing actions on the dashboard time out i m able to get data back eventually when making queries to commons or wikipedia on the rails console but it takes a very long time staging and other instances are not affected just dashboard wikiedu org | 1 |
641,915 | 20,844,052,813 | IssuesEvent | 2022-03-21 06:20:27 | AlphaWallet/alpha-wallet-ios | https://api.github.com/repos/AlphaWallet/alpha-wallet-ios | opened | Freeze with grayed out screen after sending ETH on Ropsten successfully | High Priority | This is right after the actionsheet is closed. The transaction was sent successfully (confirmed in Etherscan).<br> | 1.0 | Freeze with grayed out screen after sending ETH on Ropsten successfully - This is right after the actionsheet is closed. The transaction was sent successfully (confirmed in Etherscan).<br> | priority | freeze with grayed out screen after sending eth on ropsten successfully this is right after the actionsheet is closed the transaction was sent successfully confirmed in etherscan | 1 |
316,737 | 9,654,309,222 | IssuesEvent | 2019-05-19 13:06:29 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [Dungeon] Deadmines -loot table- (issue 2) | Dungeon/Raid Exploit Priority-High | **Links:**<br>http://i66.tinypic.com/20nor5.jpg<br>http://i63.tinypic.com/2hi9qig.jpg<br>http://i65.tinypic.com/2dih9n7.jpg<br>http://i64.tinypic.com/28a3qti.jpg<br>**What is happening:**<br>The problem is all the bosses from Deadmines (lvl 15 one) drop Justice Points<br>**What should happen:**<br>Its a low level dungeon and it should not drop Justice Points | 1.0 | [Dungeon] Deadmines -loot table- (issue 2) - **Links:**<br>http://i66.tinypic.com/20nor5.jpg<br>http://i63.tinypic.com/2hi9qig.jpg<br>http://i65.tinypic.com/2dih9n7.jpg<br>http://i64.tinypic.com/28a3qti.jpg<br>**What is happening:**<br>The problem is all the bosses from Deadmines (lvl 15 one) drop Justice Points<br>**What should happen:**<br>Its a low level dungeon and it should not drop Justice Points | priority | deadmines loot table issue links what is happening the problem is all the bosses from deadmines lvl one drop justice points what should happen its a low level dungeon and it should not drop justice points | 1 |
276,506 | 8,599,481,732 | IssuesEvent | 2018-11-16 02:20:14 | QuantEcon/lecture-source-jl | https://api.github.com/repos/QuantEcon/lecture-source-jl | opened | Remaining Slate of Changes | high-priority | A lot of the checklists and such were out of date so I figured I would reconsolidate. This binds for the non-Jupinx/build changes.<br>### Interpolations<br>The only thing missing is the use of `Dierckx` in `amss`, which @Nosferican is rewriting in #159.<br>### Fixed Point and NLsolve<br>We use `compute_fixed_point` in `career, ifp, odu`. Xiaojun was having trouble converting these over. @Nosferican and I said we'd take a look --- I'll assign myself to this while he's working on the `amss`, etc.<br>### Expectations<br>Done all the ones we can without a major rewrite. The last thing we might do is replace the two instances of `do_quad` in `odu`.<br>### Structs and Mutable Structs<br>Mutable Structs:<br>- [ ] `amss` (#159)<br>- [ ] `odu`<br>- [ ] `opt_tax_recur`<br>- [ ] `uncertainty_traps`<br>- [ ] `lake_model` (almost done)<br>Structs:<br>- [ ] `amss` (#159)<br>- [ ] `hist_dep_policies`<br>- [ ] `jv`<br>- [ ] `lqramsey`<br>- [ ] `odu`<br>- [ ] `opt_tax_recur`<br>- [ ] `lake_model` (above)<br>I took out the type parameters etc. from a lot of these, just need to rip them out.<br>### Plots<br>We still call `pyplot()` in<br>- [ ] `linear_models`<br>- [ ] `hist_dep_policies`<br>- [ ] `amss` (#159) | 1.0 | Remaining Slate of Changes - A lot of the checklists and such were out of date so I figured I would reconsolidate. This binds for the non-Jupinx/build changes.<br>### Interpolations<br>The only thing missing is the use of `Dierckx` in `amss`, which @Nosferican is rewriting in #159.<br>### Fixed Point and NLsolve<br>We use `compute_fixed_point` in `career, ifp, odu`. Xiaojun was having trouble converting these over. @Nosferican and I said we'd take a look --- I'll assign myself to this while he's working on the `amss`, etc.<br>### Expectations<br>Done all the ones we can without a major rewrite. The last thing we might do is replace the two instances of `do_quad` in `odu`.<br>### Structs and Mutable Structs<br>Mutable Structs:<br>- [ ] `amss` (#159)<br>- [ ] `odu`<br>- [ ] `opt_tax_recur`<br>- [ ] `uncertainty_traps`<br>- [ ] `lake_model` (almost done)<br>Structs:<br>- [ ] `amss` (#159)<br>- [ ] `hist_dep_policies`<br>- [ ] `jv`<br>- [ ] `lqramsey`<br>- [ ] `odu`<br>- [ ] `opt_tax_recur`<br>- [ ] `lake_model` (above)<br>I took out the type parameters etc. from a lot of these, just need to rip them out.<br>### Plots<br>We still call `pyplot()` in<br>- [ ] `linear_models`<br>- [ ] `hist_dep_policies`<br>- [ ] `amss` (#159) | priority | remaining slate of changes a lot of the checklists and such were out of date so i figured i would reconsolidate this binds for the non jupinx build changes interpolations the only thing missing is the use of dierckx in amss which nosferican is rewriting in fixed point and nlsolve we use compute fixed point in career ifp odu xiaojun was having trouble converting these over nosferican and i said we d take a look i ll assign myself to this while he s working on the amss etc expectations done all the ones we can without a major rewrite the last thing we might do is replace the two instances of do quad in odu structs and mutable structs mutable structs amss odu opt tax recur uncertainty traps lake model almost done structs amss hist dep policies jv lqramsey odu opt tax recur lake model above i took out the type parameters etc from a lot of these just need to rip them out plots we still call pyplot in linear models hist dep policies amss | 1 |
304,882 | 9,337,168,364 | IssuesEvent | 2019-03-28 23:52:41 | bcgov/ols-router | https://api.github.com/repos/bcgov/ols-router | closed | in route planner, add a resource called ping that checks node health | api enhancement functional route planner high priority | @mraross commented on [Fri Jun 01 2018](https://github.com/bcgov/api-specs/issues/336)<br>The resource should perform a simple distance along road network calculation between two known points and return an HTTP 200 response code with no body.<br>Example:<br>router.api.gov.bc.ca/ping<br>ping will be used by a watchdog timer in the API Gateway to determine node health. | 1.0 | in route planner, add a resource called ping that checks node health - @mraross commented on [Fri Jun 01 2018](https://github.com/bcgov/api-specs/issues/336)<br>The resource should perform a simple distance along road network calculation between two known points and return an HTTP 200 response code with no body.<br>Example:<br>router.api.gov.bc.ca/ping<br>ping will be used by a watchdog timer in the API Gateway to determine node health. | priority | in route planner add a resource called ping that checks node health mraross commented on the resource should perform a simple distance along road network calculation between two known points and return an http response code with no body example router api gov bc ca ping ping will be used by a watchdog timer in the api gateway to determine node health | 1 |
646,182 | 21,040,133,388 | IssuesEvent | 2022-03-31 11:32:48 | AY2122S2-CS2103T-T12-1/tp | https://api.github.com/repos/AY2122S2-CS2103T-T12-1/tp | closed | Batch update for student activities | type.AdvancedFeatures priority.High type.Enhancement v1.3 | Current implementation is only for `ClassCode`, v1.3 needs for activities as well | 1.0 | Batch update for student activities - Current implementation is only for `ClassCode`, v1.3 needs for activities as well | priority | batch update for student activities current implementation is only for classcode needs for activities as well | 1 |
563,292 | 16,679,412,057 | IssuesEvent | 2021-06-07 20:49:12 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [studio] Create site and site listing fail due to missing publish_status permission | bug priority: high | ## Describe the bug<br>Site creation and site listing fail due to missing `publish_status` permission.<br>## To Reproduce<br>Steps to reproduce the behavior:<br>1. Create a site from Editorial BP<br>2. Watch the logs and the site listing screen<br>## Expected behavior<br>Site creation and site listing should work. Additionally, we should consider that `publish_status` is a must permission for those viewing the site listing.<br>## Screenshots<br><br>## Logs<br>```logs<br>[ERROR] 2021-06-07T16:46:49,863 [http-nio-8080-exec-3] [v2.ExceptionHandlers] \| API endpoint http://localhost:8080/studio/api/2/publish/status?siteId=ed failed with response: ApiResponse{code=2001, message='Unauthorized', remedialAction='You don't have permission to perform this task, please contact your administrator', documentationUrl=''}<br>org.craftercms.commons.security.exception.ActionDeniedException: Current subject does not have permission to execute action "publish_status" on {siteId=ed}<br>```<br>## Specs<br>### Version<br>4.0.0-SNAPSHOT on 6/7/2021<br>### OS<br>Linux<br>### Browser<br>Chrome<br>## Additional context<br>N/A | 1.0 | [studio] Create site and site listing fail due to missing publish_status permission - ## Describe the bug<br>Site creation and site listing fail due to missing `publish_status` permission.<br>## To Reproduce<br>Steps to reproduce the behavior:<br>1. Create a site from Editorial BP<br>2. Watch the logs and the site listing screen<br>## Expected behavior<br>Site creation and site listing should work. Additionally, we should consider that `publish_status` is a must permission for those viewing the site listing.<br>## Screenshots<br><br>## Logs<br>```logs<br>[ERROR] 2021-06-07T16:46:49,863 [http-nio-8080-exec-3] [v2.ExceptionHandlers] \| API endpoint http://localhost:8080/studio/api/2/publish/status?siteId=ed failed with response: ApiResponse{code=2001, message='Unauthorized', remedialAction='You don't have permission to perform this task, please contact your administrator', documentationUrl=''}<br>org.craftercms.commons.security.exception.ActionDeniedException: Current subject does not have permission to execute action "publish_status" on {siteId=ed}<br>```<br>## Specs<br>### Version<br>4.0.0-SNAPSHOT on 6/7/2021<br>### OS<br>Linux<br>### Browser<br>Chrome<br>## Additional context<br>N/A | priority | create site and site listing fail due to missing publish status permission describe the bug site creation and site listing fail due to missing publish status permission to reproduce steps to reproduce the behavior create a site from editorial bp watch the logs and the site listing screen expected behavior site creation and site listing should work additionally we should consider that publish status is a must permission for those viewing the site listing screenshots logs logs api endpoint failed with response apiresponse code message unauthorized remedialaction you don t have permission to perform this task please contact your administrator documentationurl org craftercms commons security exception actiondeniedexception current subject does not have permission to execute action publish status on siteid ed specs version snapshot on os linux browser chrome additional context n a | 1 |
227,073 | 7,526,636,058 | IssuesEvent | 2018-04-13 14:35:25 | USGCRP/gcis | https://api.github.com/repos/USGCRP/gcis | closed | CSSR Postmortem | priority high type question | Review how CSSR went. Talk about what went well, and what we wish had gone better. Figure out what we can do to improve those things before we do this again for NCA4! | 1.0 | CSSR Postmortem - Review how CSSR went. Talk about what went well, and what we wish had gone better. Figure out what we can do to improve those things before we do this again for NCA4! | priority | cssr postmortem review how cssr went talk about what went well and what we wish had gone better figure out what we can do to improve those things before we do this again for | 1 |
760,049 | 26,625,727,329 | IssuesEvent | 2023-01-24 14:26:10 | ITISFoundation/osparc-simcore | https://api.github.com/repos/ITISFoundation/osparc-simcore | opened | Pulling images not always works as expected | bug High Priority | I have a new machine and just published s4l. Pulling the service failed 2/3 times with the following error.<br>```<br>WARNING: [2023-01-24 13:59:30,813/MainProcess] [servicelib.long_running_tasks._task:get_task_result_old(257)] - Task simcore_service_dynamic_sidecar.modules.long_running_tasks.task_create_service_containers.68c5562c-a737-417a-8cb5-449637b8e784 finished with error: Task simcore_service_dynamic_sidecar.modules.long_running_tasks.task_create_service_containers.68c5562c-a737-417a-8cb5-449637b8e784 finished with exception: ''93dac263047b''<br>File "/home/scu/.venv/lib/python3.9/site-packages/servicelib/long_running_tasks/_task.py", line 414, in _progress_task<br>return await handler(progress, **task_kwargs)<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/modules/long_running_tasks.py", line 119, in task_create_service_containers<br>await docker_compose_pull(app, shared_store.compose_spec)<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_compose_utils.py", line 110, in docker_compose_pull<br>await pull_images(list_of_images, registry_settings, _progress_cb, _log_cb)<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_utils.py", line 82, in pull_images<br>await asyncio.gather(<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_utils.py", line 191, in _pull_image_with_progress<br>if _parse_docker_pull_progress(<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_utils.py", line 129, in _parse_docker_pull_progress<br>_, layer_total_size = all_image_pulling_data[image_name][layer_id]<br>```<br>Feels like a concurrency issue to me, but I might be wrong. I see you are sharing an object. | 1.0 | Pulling images not always works as expected - I have a new machine and just published s4l. Pulling the service failed 2/3 times with the following error.<br>```<br>WARNING: [2023-01-24 13:59:30,813/MainProcess] [servicelib.long_running_tasks._task:get_task_result_old(257)] - Task simcore_service_dynamic_sidecar.modules.long_running_tasks.task_create_service_containers.68c5562c-a737-417a-8cb5-449637b8e784 finished with error: Task simcore_service_dynamic_sidecar.modules.long_running_tasks.task_create_service_containers.68c5562c-a737-417a-8cb5-449637b8e784 finished with exception: ''93dac263047b''<br>File "/home/scu/.venv/lib/python3.9/site-packages/servicelib/long_running_tasks/_task.py", line 414, in _progress_task<br>return await handler(progress, **task_kwargs)<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/modules/long_running_tasks.py", line 119, in task_create_service_containers<br>await docker_compose_pull(app, shared_store.compose_spec)<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_compose_utils.py", line 110, in docker_compose_pull<br>await pull_images(list_of_images, registry_settings, _progress_cb, _log_cb)<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_utils.py", line 82, in pull_images<br>await asyncio.gather(<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_utils.py", line 191, in _pull_image_with_progress<br>if _parse_docker_pull_progress(<br>dy-sidecar_51fb7bc0-6141-46e9-a873-f0be3f6aacea.1.w53j60bwxpkk@testmachine2 \|<br>File "/home/scu/.venv/lib/python3.9/site-packages/simcore_service_dynamic_sidecar/core/docker_utils.py", line 129, in _parse_docker_pull_progress<br>_, layer_total_size = all_image_pulling_data[image_name][layer_id]<br>```<br>Feels like a concurrency issue to me, but I might be wrong. I see you are sharing an object. | priority | pulling images not always works as expected i have a new machine and just published pulling the service failed times with the following error warning task simcore service dynamic sidecar modules long running tasks task create service containers finished with error task simcore service dynamic sidecar modules long running tasks task create service containers finished with exception file home scu venv lib site packages servicelib long running tasks task py line in progress task return await handler progress task kwargs dy sidecar file home scu venv lib site packages simcore service dynamic sidecar modules long running tasks py line in task create service containers await docker compose pull app shared store compose spec dy sidecar file home scu venv lib site packages simcore service dynamic sidecar core docker compose utils py line in docker compose pull await pull images list of images registry settings progress cb log cb dy sidecar file home scu venv lib site packages simcore service dynamic sidecar core docker utils py line in pull images await asyncio gather dy sidecar file home scu venv lib site packages simcore service dynamic sidecar core docker utils py line in pull image with progress if parse docker pull progress dy sidecar file home scu venv lib site packages simcore service dynamic sidecar core docker utils py line in parse docker pull progress layer total size all image pulling data feels like a concurrency issue to me but i might be wrong i see you are sharing an object | 1 |
454,894 | 13,108,984,748 | IssuesEvent | 2020-08-04 17:50:17 | digital-dream-labs/vector-web-setup | https://api.github.com/repos/digital-dream-labs/vector-web-setup | closed | Fix usage string on unconfigured projects | QA Ready bug priority - high | When I run `vector-web-setup serve` without doing anything, I get this:
```
Seems like you have missed this step 'configure'!
E.g. 'npm run vector-setup configure'
```
The command listed is wrong now. It should be `vector-web-setup configure` | 1.0 | Fix usage string on unconfigured projects - When I run `vector-web-setup serve` without doing anything, I get this:
```
Seems like you have missed this step 'configure'!
E.g. 'npm run vector-setup configure'
```
The command listed is wrong now. It should be `vector-web-setup configure` | priority | fix usage string on unconfigured projects when i run vector web setup serve without doing anything i get this seems like you have missed this step configure e g npm run vector setup configure the command listed is wrong now it should be vector web setup configure | 1 |
191,015 | 6,824,830,209 | IssuesEvent | 2017-11-08 08:16:58 | xcodeswift/xcproj | https://api.github.com/repos/xcodeswift/xcproj | opened | Wrong build phase name | difficulty:easy good first issue priority:high status:ready-development type:bug | ## Context 🕵️♀️
I opened a project with `xcproj` and after saving it, some build files that belonged to a copy files build phase had the wrong comment.
## What 🌱
```
// Before
04EDB7B51F3B446100411A92 /* Features.framework in Embed Local Frameworks */ = {isa = PBXBuildFile; fileRef = 23616E711DD1CE9700250513 /* Features.framework */; settings = {ATTRIBUTES = (CodeSignOnCopy, RemoveHeadersOnCopy, ); }; };
// After
04EDB7B51F3B446100411A92 /* Features.framework in CopyFiles */ = {isa = PBXBuildFile; fileRef = 23616E711DD1CE9700250513 /* Features.framework */; settings = {ATTRIBUTES = (CodeSignOnCopy, RemoveHeadersOnCopy, ); }; };
```
**Notice the `Features.framework` in CopyFiles` comment. `CopyFiles` should be the build phase name.**
## Proposal 🎉
Update `PBXBuildFile` to generate the right comment.
<!-- Love xcproj? Please consider supporting our collective:
👉 https://opencollective.com/xcproj/donate --> | 1.0 | Wrong build phase name - ## Context 🕵️♀️
I opened a project with `xcproj` and after saving it, some build files that belonged to a copy files build phase had the wrong comment.
## What 🌱
```
// Before
04EDB7B51F3B446100411A92 /* Features.framework in Embed Local Frameworks */ = {isa = PBXBuildFile; fileRef = 23616E711DD1CE9700250513 /* Features.framework */; settings = {ATTRIBUTES = (CodeSignOnCopy, RemoveHeadersOnCopy, ); }; };
// After
04EDB7B51F3B446100411A92 /* Features.framework in CopyFiles */ = {isa = PBXBuildFile; fileRef = 23616E711DD1CE9700250513 /* Features.framework */; settings = {ATTRIBUTES = (CodeSignOnCopy, RemoveHeadersOnCopy, ); }; };
```
**Notice the `Features.framework` in CopyFiles` comment. `CopyFiles` should be the build phase name.**
## Proposal 🎉
Update `PBXBuildFile` to generate the right comment.
<!-- Love xcproj? Please consider supporting our collective:
👉 https://opencollective.com/xcproj/donate --> | priority | wrong build phase name context 🕵️♀️ i opened a project with xcproj and after saving it some build files that belonged to a copy files build phase had the wrong comment what 🌱 before features framework in embed local frameworks isa pbxbuildfile fileref features framework settings attributes codesignoncopy removeheadersoncopy after features framework in copyfiles isa pbxbuildfile fileref features framework settings attributes codesignoncopy removeheadersoncopy notice the features framework in copyfiles comment copyfiles should be the build phase name proposal 🎉 update pbxbuildfile to generate the right comment love xcproj please consider supporting our collective 👉 | 1 |
210,636 | 7,191,774,417 | IssuesEvent | 2018-02-02 22:27:19 | OpenEnergyDashboard/OED | https://api.github.com/repos/OpenEnergyDashboard/OED | closed | colors on compare day chart | bug high priority | The colors for the Day comparison is different from the other two choices. I think this one is wrong. First, the projected usage is not lighter for today. Second, yesterday and today actual usage to that time are not the same color. Note, the color for yesterday are reversed from the other compares and for today the projected is the color for actual in the other charts. | 1.0 | colors on compare day chart - The colors for the Day comparison is different from the other two choices. I think this one is wrong. First, the projected usage is not lighter for today. Second, yesterday and today actual usage to that time are not the same color. Note, the color for yesterday are reversed from the other compares and for today the projected is the color for actual in the other charts. | priority | colors on compare day chart the colors for the day comparison is different from the other two choices i think this one is wrong first the projected usage is not lighter for today second yesterday and today actual usage to that time are not the same color note the color for yesterday are reversed from the other compares and for today the projected is the color for actual in the other charts | 1 |
646,076 | 21,036,636,478 | IssuesEvent | 2022-03-31 08:28:29 | AY2122S2-CS2103-F11-2/tp | https://api.github.com/repos/AY2122S2-CS2103-F11-2/tp | closed | Fix profile picture not showing for lowercase letters | type.Bug priority.High | Currently the profile picture in the focus card crashes when user enters a name with lower case.
Let's add a check to enable profile picture to be shown for names with lower case. | 1.0 | Fix profile picture not showing for lowercase letters - Currently the profile picture in the focus card crashes when user enters a name with lower case.
Let's add a check to enable profile picture to be shown for names with lower case. | priority | fix profile picture not showing for lowercase letters currently the profile picture in the focus card crashes when user enters a name with lower case let s add a check to enable profile picture to be shown for names with lower case | 1 |
771,911 | 27,098,795,354 | IssuesEvent | 2023-02-15 06:39:55 | xKDR/Survey.jl | https://api.github.com/repos/xKDR/Survey.jl | closed | Registration of package in Julia Repository | high priority | This thread to start the process of registeration of `Survey.jl`.
@ayushpatnaikgit you said that `Survey.jl` is very close to existing package [`Curves.jl`](https://juliapackages.com/p/curves).
Lets proceed to registration? | 1.0 | Registration of package in Julia Repository - This thread to start the process of registeration of `Survey.jl`.
@ayushpatnaikgit you said that `Survey.jl` is very close to existing package [`Curves.jl`](https://juliapackages.com/p/curves).
Lets proceed to registration? | priority | registration of package in julia repository this thread to start the process of registeration of survey jl ayushpatnaikgit you said that survey jl is very close to existing package lets proceed to registration | 1 |
53,705 | 3,044,228,274 | IssuesEvent | 2015-08-10 07:51:25 | UnifiedViews/Core | https://api.github.com/repos/UnifiedViews/Core | closed | Description field is always empty for root DPU templates | priority: High resolution: fixed severity: bug | After https://github.com/UnifiedViews/Core/pull/491 is accepted, name and description of root DPU templates (second level items in the DPU template tree, just under "Extractors", "Transformers", and "Loader") cannot be changed. Thus, for every root DPU template, the general tab looks as follows:
<img width="1410" alt="screen shot 2015-08-06 at 11 12 22" src="https://cloud.githubusercontent.com/assets/3014917/9108490/d42812e0-3c2e-11e5-8822-3f37daa99cee.png">
Description is taken from pom.xml and put to "Description of JAR". So "Description" field is always empty for root DPU templates (2nd level items in the DPU tree) which is a bit confusing.
Proposed solution:
1) the description field should contain fixed text "See DPU Template Configuration, About tab". Such text should be localized for SK version (with the corresponding names of the tabs for SK version).
| 1.0 | Description field is always empty for root DPU templates - After https://github.com/UnifiedViews/Core/pull/491 is accepted, name and description of root DPU templates (second level items in the DPU template tree, just under "Extractors", "Transformers", and "Loader") cannot be changed. Thus, for every root DPU template, the general tab looks as follows:
<img width="1410" alt="screen shot 2015-08-06 at 11 12 22" src="https://cloud.githubusercontent.com/assets/3014917/9108490/d42812e0-3c2e-11e5-8822-3f37daa99cee.png">
Description is taken from pom.xml and put to "Description of JAR". So "Description" field is always empty for root DPU templates (2nd level items in the DPU tree) which is a bit confusing.
Proposed solution:
1) the description field should contain fixed text "See DPU Template Configuration, About tab". Such text should be localized for SK version (with the corresponding names of the tabs for SK version).
| priority | description field is always empty for root dpu templates after is accepted name and description of root dpu templates second level items in the dpu template tree just under extractors transformers and loader cannot be changed thus for every root dpu template the general tab looks as follows img width alt screen shot at src description is taken from pom xml and put to description of jar so description field is always empty for root dpu templates level items in the dpu tree which is a bit confusing proposed solution the description field should contain fixed text see dpu template configuration about tab such text should be localized for sk version with the corresponding names of the tabs for sk version | 1 |
555,899 | 16,472,387,860 | IssuesEvent | 2021-05-23 17:19:38 | lorenzwalthert/precommit | https://api.github.com/repos/lorenzwalthert/precommit | closed | {renv} hook dependencies should be auto-updated | Complexity: Medium Priority: High Status: WIP | We can build a GitHub action to send a PR once a month. | 1.0 | {renv} hook dependencies should be auto-updated - We can build a GitHub action to send a PR once a month. | priority | renv hook dependencies should be auto updated we can build a github action to send a pr once a month | 1 |
563,061 | 16,675,514,187 | IssuesEvent | 2021-06-07 15:43:02 | 10up/ElasticPress | https://api.github.com/repos/10up/ElasticPress | closed | Image block and post_mime_type | confirmed bug high priority wip | **Describe the bug**
The Gutenberg image block adds `'post_mime_type' => 'image'` to the media modal's search queries (the `query-attachments` AJAX action). The classic editor didn't add this parameter.
ElasticPress translates post_mime_type to an exact match, but in `wp_post_mime_type_where()`, which WP_Query uses to parse that argument, if there's no `/` in the post_mime_type, the resulting SQL wildcards the second half of the mime type, so `LIKE 'image/%'` (for this specific case).
While admin-side functionality isn't the default for this plugin, updating this logic would make it more seamless.
**Steps to Reproduce**
1. Install/configure ElasticPress, Classic Editor plugin
2. Enable document indexing
3. Enable admin and AJAX via `ep_admin_wp_query_integration` and `ep_ajax_wp_query_integration`
4. Add an image to the media library
5. Create a new post
6. Edit with classic editor
7. Use the "Add Media" button to open the media modal
8. See your image
9. Use the search bar to search for your image
10. Still see your image
11. Close out. Edit a post with the block editor.
12. Add a new image block to bring up the media modal
13. See your image
14. Use the search bar to search for your image
15. See no results returned
**Expected behavior**
Image search results in the admin should be consistent.
**Environment information**
- Device: ThinkPad
- OS: Windows
- Browser and version: Firefox
- Plugins and version: ElasticPress, latest
- Theme and version: Custom
- Other installed plugin(s) and version(s): I can provide this if the above steps fail to reproduce
**Additional context**
Current workaround is a `pre_get_posts` filter:
```
public function query_attachments( &$query ) {
if( !wp_doing_ajax() || $_POST['action'] != 'query-attachments' ) {
return;
}
$query->set('post_mime_type', '');
}
``` | 1.0 | Image block and post_mime_type - **Describe the bug**
The Gutenberg image block adds `'post_mime_type' => 'image'` to the media modal's search queries (the `query-attachments` AJAX action). The classic editor didn't add this parameter.
ElasticPress translates post_mime_type to an exact match, but in `wp_post_mime_type_where()`, which WP_Query uses to parse that argument, if there's no `/` in the post_mime_type, the resulting SQL wildcards the second half of the mime type, so `LIKE 'image/%'` (for this specific case).
While admin-side functionality isn't the default for this plugin, updating this logic would make it more seamless.
**Steps to Reproduce**
1. Install/configure ElasticPress, Classic Editor plugin
2. Enable document indexing
3. Enable admin and AJAX via `ep_admin_wp_query_integration` and `ep_ajax_wp_query_integration`
4. Add an image to the media library
5. Create a new post
6. Edit with classic editor
7. Use the "Add Media" button to open the media modal
8. See your image
9. Use the search bar to search for your image
10. Still see your image
11. Close out. Edit a post with the block editor.
12. Add a new image block to bring up the media modal
13. See your image
14. Use the search bar to search for your image
15. See no results returned
**Expected behavior**
Image search results in the admin should be consistent.
**Environment information**
- Device: ThinkPad
- OS: Windows
- Browser and version: Firefox
- Plugins and version: ElasticPress, latest
- Theme and version: Custom
- Other installed plugin(s) and version(s): I can provide this if the above steps fail to reproduce
**Additional context**
Current workaround is a `pre_get_posts` filter:
```
public function query_attachments( &$query ) {
if( !wp_doing_ajax() || $_POST['action'] != 'query-attachments' ) {
return;
}
$query->set('post_mime_type', '');
}
``` | priority | image block and post mime type describe the bug the gutenberg image block adds post mime type image to the media modal s search queries the query attachments ajax action the classic editor didn t add this parameter elasticpress translates post mime type to an exact match but in wp post mime type where which wp query uses to parse that argument if there s no in the post mime type the resulting sql wildcards the second half of the mime type so like image for this specific case while admin side functionality isn t the default for this plugin updating this logic would make it more seamless steps to reproduce install configure elasticpress classic editor plugin enable document indexing enable admin and ajax via ep admin wp query integration and ep ajax wp query integration add an image to the media library create a new post edit with classic editor use the add media button to open the media modal see your image use the search bar to search for your image still see your image close out edit a post with the block editor add a new image block to bring up the media modal see your image use the search bar to search for your image see no results returned expected behavior image search results in the admin should be consistent environment information device thinkpad os windows browser and version firefox plugins and version elasticpress latest theme and version custom other installed plugin s and version s i can provide this if the above steps fail to reproduce additional context current workaround is a pre get posts filter public function query attachments query if wp doing ajax post query attachments return query set post mime type | 1 |
827,409 | 31,771,413,802 | IssuesEvent | 2023-09-12 12:03:19 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | closed | Fix moneybird pagination problem in get_or_create_project | priority: high bug moneybirdsynchronization | ### Describe the bug
https://thalia.sentry.io/issues/4111947935/?project=1463433&query=is%3Aunresolved+issue.category%3Aerror&referrer=issue-stream&statsPeriod=14d&stream_index=5
Moneybird stuff nearly always fails when getting an existing project, because only 25 projects are returned by default (iirc 100 is the max that can be set).
So we need to either handle pagination, or find out if we can filter by name in https://github.com/svthalia/concrexit/blob/d04b57585e58d692916d4a5225eeea4bfa4cde4c/website/moneybirdsynchronization/moneybird.py#L23-L25.
### How to reproduce
Steps to reproduce the behaviour:
1. Try moneybird if over 25 projects exist.
| 1.0 | Fix moneybird pagination problem in get_or_create_project - ### Describe the bug
https://thalia.sentry.io/issues/4111947935/?project=1463433&query=is%3Aunresolved+issue.category%3Aerror&referrer=issue-stream&statsPeriod=14d&stream_index=5
Moneybird stuff nearly always fails when getting an existing project, because only 25 projects are returned by default (iirc 100 is the max that can be set).
So we need to either handle pagination, or find out if we can filter by name in https://github.com/svthalia/concrexit/blob/d04b57585e58d692916d4a5225eeea4bfa4cde4c/website/moneybirdsynchronization/moneybird.py#L23-L25.
### How to reproduce
Steps to reproduce the behaviour:
1. Try moneybird if over 25 projects exist.
| priority | fix moneybird pagination problem in get or create project describe the bug moneybird stuff nearly always fails when getting an existing project because only projects are returned by default iirc is the max that can be set so we need to either handle pagination or find out if we can filter by name in how to reproduce steps to reproduce the behaviour try moneybird if over projects exist | 1 |
386,840 | 11,451,432,819 | IssuesEvent | 2020-02-06 11:35:41 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | While editing a post from WPbakery builder,there is an error in Console (Uncaught TypeError: Cannot read property 'on' of null) | NEXT UPDATE [Priority: HIGH] bug | Ref:- https://secure.helpscout.net/conversation/1065294226/108105?folderId=1060556
https://monosnap.com/file/QoVWO8Gknb5UHWVp4SOcauwuQi80fx
| 1.0 | While editing a post from WPbakery builder,there is an error in Console (Uncaught TypeError: Cannot read property 'on' of null) - Ref:- https://secure.helpscout.net/conversation/1065294226/108105?folderId=1060556
https://monosnap.com/file/QoVWO8Gknb5UHWVp4SOcauwuQi80fx
| priority | while editing a post from wpbakery builder there is an error in console uncaught typeerror cannot read property on of null ref | 1 |
830,239 | 31,997,118,382 | IssuesEvent | 2023-09-21 09:52:58 | EBISPOT/goci | https://api.github.com/repos/EBISPOT/goci | closed | Implement process to detect and update obsoleted EFO terms prior to data release | Priority: High | EFO sometimes obsoletes terms and replaces them with new ones. When obsolete terms have been used in the GWAS Catalog and not updated, this causes the data release to fail.
We need a process to to detect and update obsolete terms to be run prior to data release.
EFO release notes are here:
https://github.com/EBISPOT/efo/blob/master/ExFactor%20Ontology%20release%20notes.txt
See also these tickets:
gwas-utils#15
goci#366 (wider issue of propogating changes in EFO eg term name, definition, to the GWAS catalog, not required now but we should plan for this in the future)
| 1.0 | Implement process to detect and update obsoleted EFO terms prior to data release - EFO sometimes obsoletes terms and replaces them with new ones. When obsolete terms have been used in the GWAS Catalog and not updated, this causes the data release to fail.
We need a process to to detect and update obsolete terms to be run prior to data release.
EFO release notes are here:
https://github.com/EBISPOT/efo/blob/master/ExFactor%20Ontology%20release%20notes.txt
See also these tickets:
gwas-utils#15
goci#366 (wider issue of propogating changes in EFO eg term name, definition, to the GWAS catalog, not required now but we should plan for this in the future)
| priority | implement process to detect and update obsoleted efo terms prior to data release efo sometimes obsoletes terms and replaces them with new ones when obsolete terms have been used in the gwas catalog and not updated this causes the data release to fail we need a process to to detect and update obsolete terms to be run prior to data release efo release notes are here see also these tickets gwas utils goci wider issue of propogating changes in efo eg term name definition to the gwas catalog not required now but we should plan for this in the future | 1 |
541,223 | 15,823,560,705 | IssuesEvent | 2021-04-06 01:04:08 | AY2021S2-CS2103T-T12-4/tp | https://api.github.com/repos/AY2021S2-CS2103T-T12-4/tp | closed | [PE-D] Valid Tag Parameters | priority.High severity.VeryLow type.Bug | valid tag examples:
nature
nature123
invalid tag examples
nature 123
nature nature
error msg for these invalid tag -> tag names should be alphanumeric.
Perhaps the error msg could be more specific? or mention that tag names should not contain space within
<!--session: 1617429946124-258a37f6-cce5-4442-9226-410ccb423b7f-->
-------------
Labels: `severity.Low` `type.FunctionalityBug`
original: Nanxi-Huang/ped#3 | 1.0 | [PE-D] Valid Tag Parameters - valid tag examples:
nature
nature123
invalid tag examples
nature 123
nature nature
error msg for these invalid tag -> tag names should be alphanumeric.
Perhaps the error msg could be more specific? or mention that tag names should not contain space within
<!--session: 1617429946124-258a37f6-cce5-4442-9226-410ccb423b7f-->
-------------
Labels: `severity.Low` `type.FunctionalityBug`
original: Nanxi-Huang/ped#3 | priority | valid tag parameters valid tag examples nature invalid tag examples nature nature nature error msg for these invalid tag tag names should be alphanumeric perhaps the error msg could be more specific or mention that tag names should not contain space within labels severity low type functionalitybug original nanxi huang ped | 1 |
607,270 | 18,778,557,860 | IssuesEvent | 2021-11-08 01:29:06 | Pr47/Pr47 | https://api.github.com/repos/Pr47/Pr47 | closed | Unify `Combustor` and `AsyncCombustor` | C: feature P: high-priority E: medium A: engine K: al31f A: async-await | `Serializer` can also be of good use when doing something tricky. | 1.0 | Unify `Combustor` and `AsyncCombustor` - `Serializer` can also be of good use when doing something tricky. | priority | unify combustor and asynccombustor serializer can also be of good use when doing something tricky | 1 |
809,057 | 30,122,848,806 | IssuesEvent | 2023-06-30 16:37:38 | netlify/next-runtime | https://api.github.com/repos/netlify/next-runtime | closed | Support Custom Route Handlers | priority: high Ecosystem: Frameworks | Next.js 13.2 introduces custom route handlers. See https://nextjs.org/blog/next-13-2#custom-route-handlers. We currently handle routes in pages/api, but we'll need to support this as well for Next 13 users.
For the most part, this should work out of the box because it will be reflected in the routes manifest, but we may need to tweak our logic a bit to support this. | 1.0 | Support Custom Route Handlers - Next.js 13.2 introduces custom route handlers. See https://nextjs.org/blog/next-13-2#custom-route-handlers. We currently handle routes in pages/api, but we'll need to support this as well for Next 13 users.
For the most part, this should work out of the box because it will be reflected in the routes manifest, but we may need to tweak our logic a bit to support this. | priority | support custom route handlers next js introduces custom route handlers see we currently handle routes in pages api but we ll need to support this as well for next users for the most part this should work out of the box because it will be reflected in the routes manifest but we may need to tweak our logic a bit to support this | 1 |
204,463 | 7,087,965,628 | IssuesEvent | 2018-01-11 19:43:34 | SilentChaos512/ScalingHealth | https://api.github.com/repos/SilentChaos512/ScalingHealth | closed | Blight equipment seems pretty good | bug priority: high | 
Caption: "U fukin wot, m8?"
The enchantments go off the page for this blight's gear. I believe it came from a golden blight skelly that dropped into my grinder. My difficulty wasn't even that high. was only like 12 and then it went down to zero with his death.
Mod version is ScalingHealth-1.12-1.3.7-87. | 1.0 | Blight equipment seems pretty good - 
Caption: "U fukin wot, m8?"
The enchantments go off the page for this blight's gear. I believe it came from a golden blight skelly that dropped into my grinder. My difficulty wasn't even that high. was only like 12 and then it went down to zero with his death.
Mod version is ScalingHealth-1.12-1.3.7-87. | priority | blight equipment seems pretty good caption u fukin wot the enchantments go off the page for this blight s gear i believe it came from a golden blight skelly that dropped into my grinder my difficulty wasn t even that high was only like and then it went down to zero with his death mod version is scalinghealth | 1 |
283,420 | 8,719,452,712 | IssuesEvent | 2018-12-08 00:58:01 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | VisIt crashes on startup on Windows Vista | bug crash likelihood high priority reviewed severity high wrong results | When starting VisIt on Windows Vista, the mdserver crashes on startup.
A few users have reported this workaround: to run VisIt with compatibility mode set to NT 4 (service pack 5).
I verified this work-around on my version of Vista, which is Vista Business 64 bit, SP2.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 192
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: VisIt crashes on startup on Windows Vista
Assigned to: Kathleen Biagas
Category:
Target version: 2.1
Author: Kathleen Biagas
Start: 06/30/2010
Due date:
% Done: 0
Estimated time:
Created: 06/30/2010 08:19 pm
Updated: 08/27/2010 05:34 pm
Likelihood: 4 - Common
Severity: 4 - Crash / Wrong Results
Found in version: 2.0.0
Impact:
Expected Use:
OS: Windows
Support Group: Any
Description:
When starting VisIt on Windows Vista, the mdserver crashes on startup.
A few users have reported this workaround: to run VisIt with compatibility mode set to NT 4 (service pack 5).
I verified this work-around on my version of Vista, which is Vista Business 64 bit, SP2.
Comments:
Assignment from LLNL VisIt 2.1 Release Meeting
I downloaded Microsoft's 'Application Compatibility Toolkit' in order to track down why Vista is flagging VisIt as needing to be run in compatibility mode (especially an NT-4 compat mode!)First few passes with the tool appear to indicate that VisIt is attempting to WRITE to HKLM registry files, which should notbe happening. That's the only compatibility issues that cropped up. The registry write does not appear in VisIt's source code.I ran VisIt through a a tool that generates call-stack information to discover the source of the Registry write operations, andit appears to be happening down in GL calls. (wglSwapMultipleBuffers). If this is truly the case, I'm not sure what we can doto mitigate this issue on Vista.
Binaries built with Visual Studio 9.0 (2008) do not have the same issue.Also, binaries built on Vista using Visual Studio 8 do not have this issue.Starting with VisIt 2.1, we will distribute binaries built with Visual Studio 9.
| 1.0 | VisIt crashes on startup on Windows Vista - When starting VisIt on Windows Vista, the mdserver crashes on startup.
A few users have reported this workaround: to run VisIt with compatibility mode set to NT 4 (service pack 5).
I verified this work-around on my version of Vista, which is Vista Business 64 bit, SP2.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 192
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: VisIt crashes on startup on Windows Vista
Assigned to: Kathleen Biagas
Category:
Target version: 2.1
Author: Kathleen Biagas
Start: 06/30/2010
Due date:
% Done: 0
Estimated time:
Created: 06/30/2010 08:19 pm
Updated: 08/27/2010 05:34 pm
Likelihood: 4 - Common
Severity: 4 - Crash / Wrong Results
Found in version: 2.0.0
Impact:
Expected Use:
OS: Windows
Support Group: Any
Description:
When starting VisIt on Windows Vista, the mdserver crashes on startup.
A few users have reported this workaround: to run VisIt with compatibility mode set to NT 4 (service pack 5).
I verified this work-around on my version of Vista, which is Vista Business 64 bit, SP2.
Comments:
Assignment from LLNL VisIt 2.1 Release Meeting
I downloaded Microsoft's 'Application Compatibility Toolkit' in order to track down why Vista is flagging VisIt as needing to be run in compatibility mode (especially an NT-4 compat mode!)First few passes with the tool appear to indicate that VisIt is attempting to WRITE to HKLM registry files, which should notbe happening. That's the only compatibility issues that cropped up. The registry write does not appear in VisIt's source code.I ran VisIt through a a tool that generates call-stack information to discover the source of the Registry write operations, andit appears to be happening down in GL calls. (wglSwapMultipleBuffers). If this is truly the case, I'm not sure what we can doto mitigate this issue on Vista.
Binaries built with Visual Studio 9.0 (2008) do not have the same issue.Also, binaries built on Vista using Visual Studio 8 do not have this issue.Starting with VisIt 2.1, we will distribute binaries built with Visual Studio 9.
 | priority | visit crashes on startup on windows vista when starting visit on windows vista the mdserver crashes on startup a few users have reported this workaround to run visit with compatibility mode set to nt service pack i verified this work around on my version of vista which is vista business bit redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject visit crashes on startup on windows vista assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood common severity crash wrong results found in version impact expected use os windows support group any description when starting visit on windows vista the mdserver crashes on startup a few users have reported this workaround to run visit with compatibility mode set to nt service pack i verified this work around on my version of vista which is vista business bit comments assignment from llnl visit release meeting i downloaded microsoft s application compatibility toolkit in order to track down why vista is flagging visit as needing to be run in compatibility mode especially an nt compat mode first few passes with the tool appear to indicate that visit is attempting to write to hklm registry files which should notbe happening that s the only compatibility issues that cropped up the registry write does not appear in visit s source code i ran visit through a a tool that generates call stack information to discover the source of the registry write operations andit appears to be happening down in gl calls wglswapmultiplebuffers if this is truly the case i m not sure what we can doto mitigate this issue on vista binaries built with visual studio do not have the same issue also binaries built on vista using visual studio do not have this issue starting with visit we will distribute binaries built with visual studio | 1 |
374,871 | 11,096,472,535 | IssuesEvent | 2019-12-16 11:13:37 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.gutenberg.org - see bug description | browser-focus-geckoview engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal | <!-- @browser: Firefox Mobile 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.gutenberg.org/ebooks/17731
**Browser / Version**: Firefox Mobile 71.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: I can't back out of this page
**Steps to Reproduce**:
I was led to this page, nothing on there apply to my situation, so I tried to back out. Can't be none.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.gutenberg.org - see bug description - <!-- @browser: Firefox Mobile 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.gutenberg.org/ebooks/17731
**Browser / Version**: Firefox Mobile 71.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: I can't back out of this page
**Steps to Reproduce**:
I was led to this page, nothing on there apply to my situation, so I tried to back out. Can't be none.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description i can t back out of this page steps to reproduce i was led to this page nothing on there apply to my situation so i tried to back out can t be none browser configuration none from with ❤️ | 1 |
227,782 | 7,542,697,901 | IssuesEvent | 2018-04-17 13:42:56 | EdwardHinkle/indigenous-ios | https://api.github.com/repos/EdwardHinkle/indigenous-ios | closed | Send access_token in the POST parameters | Micropub Support enhancement high priority | Wordpress and some other sites require access_token to be sent in post parameters. | 1.0 | Send access_token in the POST parameters - Wordpress and some other sites require access_token to be sent in post parameters. | priority | send access token in the post parameters wordpress and some other sites require access token to be sent in post parameters | 1 |
285,706 | 8,773,553,694 | IssuesEvent | 2018-12-18 17:09:57 | Automattic/simplenote-electron | https://api.github.com/repos/Automattic/simplenote-electron | closed | Note Sync Nudge Could Cause Constant Websocket Reconnect | bug priority-high | I'm still seeing a higher amount of websocket `init` traffic from this app. I think we may have an issue with the `ActivityHooks` code repeatedly reconnecting because it thinks it has an unsynced note.
I can cause many repeated reconnects by adding `v = 0;` [here](https://github.com/Automattic/simplenote-electron/blob/master/lib/utils/sync/nudge-unsynced.js#L21-L26), but I'm not certain this is exactly what is causing this. Still, the way that we connect the client [here](https://github.com/Automattic/simplenote-electron/blob/master/lib/utils/sync/nudge-unsynced.js#L42), and it was added in 1.3.0 when this started occurring, makes me believe that something like this is happening for some users.
Perhaps we should get rid of `ActivityHooks` for now, and just reconnect the websocket during the import process only so we still solve the issue of unsynced notes for large imports?
| 1.0 | Note Sync Nudge Could Cause Constant Websocket Reconnect - I'm still seeing a higher amount of websocket `init` traffic from this app. I think we may have an issue with the `ActivityHooks` code repeatedly reconnecting because it thinks it has an unsynced note.
I can cause many repeated reconnects by adding `v = 0;` [here](https://github.com/Automattic/simplenote-electron/blob/master/lib/utils/sync/nudge-unsynced.js#L21-L26), but I'm not certain this is exactly what is causing this. Still, the way that we connect the client [here](https://github.com/Automattic/simplenote-electron/blob/master/lib/utils/sync/nudge-unsynced.js#L42), and it was added in 1.3.0 when this started occurring, makes me believe that something like this is happening for some users.
Perhaps we should get rid of `ActivityHooks` for now, and just reconnect the websocket during the import process only so we still solve the issue of unsynced notes for large imports?
| priority | note sync nudge could cause constant websocket reconnect i m still seeing a higher amount of websocket init traffic from this app i think we may have an issue with the activityhooks code repeatedly reconnecting because it thinks it has an unsynced note i can cause many repeated reconnects by adding v but i m not certain this is exactly what is causing this still the way that we connect the client and it was added in when this started occurring makes me believe that something like this is happening for some users perhaps we should get rid of activityhooks for now and just reconnect the websocket during the import process only so we still solve the issue of unsynced notes for large imports | 1 |
423,239 | 12,293,256,063 | IssuesEvent | 2020-05-10 18:09:54 | cdnjs/cdnjs | https://api.github.com/repos/cdnjs/cdnjs | closed | [Request] Add jasmine-marbles | :rotating_light: High Priority 🏷 Library Request | **Library name:** jasmine-marbles
**Library description:** Marble testing helpers for RxJS and Jasmine
**Git repository url:** https://github.com/synapse-wireless-labs/jasmine-marbles
**npm package name or url** (if there is one): https://www.npmjs.com/package/jasmine-marbles
**License (List them all if it's multiple):** MIT License
**Official homepage:** https://github.com/synapse-wireless-labs/jasmine-marbles
**Wanna say something? Leave message here:** 55,626 downloads in the last month
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/56349519-request-add-jasmine-marbles?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | [Request] Add jasmine-marbles - **Library name:** jasmine-marbles
**Library description:** Marble testing helpers for RxJS and Jasmine
**Git repository url:** https://github.com/synapse-wireless-labs/jasmine-marbles
**npm package name or url** (if there is one): https://www.npmjs.com/package/jasmine-marbles
**License (List them all if it's multiple):** MIT License
**Official homepage:** https://github.com/synapse-wireless-labs/jasmine-marbles
**Wanna say something? Leave message here:** 55,626 downloads in the last month
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/56349519-request-add-jasmine-marbles?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github).
</bountysource-plugin> | priority | add jasmine marbles library name jasmine marbles library description marble testing helpers for rxjs and jasmine git repository url npm package name or url if there is one license list them all if it s multiple mit license official homepage wanna say something leave message here downloads in the last month want to back this issue we accept bounties via | 1 |
825,051 | 31,240,826,297 | IssuesEvent | 2023-08-20 21:04:33 | bigcapitalhq/bigcapital | https://api.github.com/repos/bigcapitalhq/bigcapital | closed | [BIG-56] Should not write GL entries when save transaction as draft. | bug High priority | If you saved a new sale invoice as draft the system shouldn't write associated GL entries of the invoice unless you publish the invoice out.
* Sale invoice
* Sale Receipt
* Credit Note
* Bill
* Vendor Credit
<sub>From [SyncLinear.com](https://synclinear.com) | [BIG-56](https://linear.app/bigcaptial/issue/BIG-56/should-not-write-gl-entries-when-save-transaction-as-draft)</sub>
[BIG-56]: https://bigcapital.atlassian.net/browse/BIG-56?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [BIG-56] Should not write GL entries when save transaction as draft. - If you saved a new sale invoice as draft the system shouldn't write associated GL entries of the invoice unless you publish the invoice out.
* Sale invoice
* Sale Receipt
* Credit Note
* Bill
* Vendor Credit
<sub>From [SyncLinear.com](https://synclinear.com) | [BIG-56](https://linear.app/bigcaptial/issue/BIG-56/should-not-write-gl-entries-when-save-transaction-as-draft)</sub>
[BIG-56]: https://bigcapital.atlassian.net/browse/BIG-56?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | should not write gl entries when save transaction as draft if you saved a new sale invoice as draft the system shouldn t write associated gl entries of the invoice unless you publish the invoice out sale invoice sale receipt credit note bill vendor credit from | 1 |
594,711 | 18,051,826,322 | IssuesEvent | 2021-09-19 21:48:40 | EdwinParra35/MinTIC_g02_TicDigitalG8 | https://api.github.com/repos/EdwinParra35/MinTIC_g02_TicDigitalG8 | closed | Create the wireframe using the Figma tool | High priority | You must create the wireframe using the Figma tool; you can visit the site: https://www.figma.com/. Remember: include the training in how to use the tool as time spent on your activity in the project. | 1.0 | Create the wireframe using the Figma tool - You must create the wireframe using the Figma tool; you can visit the site: https://www.figma.com/. Remember: include the training in how to use the tool as time spent on your activity in the project. | priority | create the wireframe using the figma tool you must create the wireframe using the figma tool you can visit the site remember include the training in how to use the tool as time spent on your activity in the project | 1 |
173,016 | 6,519,051,889 | IssuesEvent | 2017-08-28 10:56:10 | VirtoCommerce/vc-platform | https://api.github.com/repos/VirtoCommerce/vc-platform | closed | Not work order products by created date | bug Priority: High | Please provide detailed information about your issue, thank you!
Version info:
- Browser version: All
- Platform version: 2.13.15
### Expected behavior
Products sorted by created date
### Actual behavior
Products not sorted by created date
### Steps to reproduce
1. Open category with products

2. Select sort filter "Date, new to old", products sorted

3. Select sort filter "Date, old to new", page reloaded, but list of product not changed

| 1.0 | Not work order products by created date - Please provide detailed information about your issue, thank you!
Version info:
- Browser version: All
- Platform version: 2.13.15
### Expected behavior
Products sorted by created date
### Actual behavior
Products not sorted by created date
### Steps to reproduce
1. Open category with products

2. Select sort filter "Date, new to old", products sorted

3. Select sort filter "Date, old to new", page reloaded, but list of product not changed

| priority | not work order products by created date please provide detailed information about your issue thank you version info browser version all platform version expected behavior products sorted by created date actual behavior products not sorted by crearted date steps to reproduce open category with products select sort filter date new to old products sorted select sort filter date old to new page reloaded but list of product not changed | 1 |
471,915 | 13,612,992,283 | IssuesEvent | 2020-09-23 11:10:33 | EBISPOT/goci | https://api.github.com/repos/EBISPOT/goci | closed | Ensure all GWAS Catalog accession IDs in identifiers.org | Priority: High | All GWAS Catalog accession IDs should be resolvable in identifiers.org
Currently published GCSTs are e.g. https://identifiers.org/resolve?query=GCST:GCST005179 but "unpublished" are not e.g. GCST90000032. | 1.0 | Ensure all GWAS Catalog accession IDs in identifiers.org - All GWAS Catalog accession IDs should be resolvable in identifiers.org
Currently published GCSTs are e.g. https://identifiers.org/resolve?query=GCST:GCST005179 but "unpublished" are not e.g. GCST90000032. | priority | ensure all gwas catalog accession ids in identifiers org all gwas catalog accession ids should be resolvable in identifiers org currently published gcsts are e g but unpublished are not e g | 1 |
210,467 | 7,190,145,244 | IssuesEvent | 2018-02-02 16:17:50 | Fourdee/DietPi | https://api.github.com/repos/Fourdee/DietPi | closed | DietPi Software | OpenVPN doesn't work | Debian Stretch Priority High Via Forum bug v6.0 | @Fourdee
http://dietpi.com/phpbb/viewtopic.php?f=11&t=2768
Quick-Check:
RPi Zero W (armv6l) DietPi v6.0
```
Welcome to DietPi-Software
DietPi-Software
─────────────────────────────────────────────────────
Mode: Update & upgrade APT
[ INFO ] APT upgrade, please wait...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[ OK ] G_AGUG
DietPi-Software
─────────────────────────────────────────────────────
Mode: Checking for prerequisite software
[ INFO ] Rsyslog will be installed
DietPi-Software
─────────────────────────────────────────────────────
Mode: Installing Rsyslog: system logging
[ INFO ] APT installation for: rsyslog --no-install-recommends, please wait...
Selecting previously unselected package liblogging-stdlog0:armhf.
(Reading database ... 22943 files and directories currently installed.)
Preparing to unpack .../liblogging-stdlog0_1.0.5-2_armhf.deb ...
Unpacking liblogging-stdlog0:armhf (1.0.5-2) ...
Selecting previously unselected package libestr0.
Preparing to unpack .../libestr0_0.1.10-2_armhf.deb ...
Unpacking libestr0 (0.1.10-2) ...
Selecting previously unselected package libfastjson4:armhf.
Preparing to unpack .../libfastjson4_0.99.4-1_armhf.deb ...
Unpacking libfastjson4:armhf (0.99.4-1) ...
Selecting previously unselected package liblognorm5:armhf.
Preparing to unpack .../liblognorm5_2.0.1-1.1_armhf.deb ...
Unpacking liblognorm5:armhf (2.0.1-1.1) ...
Selecting previously unselected package rsyslog.
Preparing to unpack .../rsyslog_8.24.0-1_armhf.deb ...
Unpacking rsyslog (8.24.0-1) ...
Setting up libestr0 (0.1.10-2) ...
Setting up libfastjson4:armhf (0.99.4-1) ...
Setting up liblogging-stdlog0:armhf (1.0.5-2) ...
Setting up liblognorm5:armhf (2.0.1-1.1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for systemd (232-25+deb9u1) ...
Setting up rsyslog (8.24.0-1) ...
Created symlink /etc/systemd/system/syslog.service → /lib/systemd/system/rsyslog.service.
Created symlink /etc/systemd/system/multi-user.target.wants/rsyslog.service → /lib/systemd/system/rsyslog.service.
Processing triggers for systemd (232-25+deb9u1) ...
[ OK ] G_AGI: rsyslog --no-install-recommends
DietPi-Services
─────────────────────────────────────────────────────
Mode: stop
[ OK ] stop : cron
[ OK ] stop : nfs-kernel-server
[ OK ] stop : lighttpd
[ OK ] stop : php7.0-fpm
[ OK ] stop : dnsmasq
[ OK ] stop : pihole-FTL
DietPi-Software
─────────────────────────────────────────────────────
Mode: Installing OpenVPN: vpn server
[ INFO ] APT installation for: openvpn easy-rsa iptables, please wait...
Preconfiguring packages ...
Selecting previously unselected package liblzo2-2:armhf.
(Reading database ... 23028 files and directories currently installed.)
Preparing to unpack .../00-liblzo2-2_2.08-1.2_armhf.deb ...
Unpacking liblzo2-2:armhf (2.08-1.2) ...
Selecting previously unselected package libip6tc0:armhf.
Preparing to unpack .../01-libip6tc0_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking libip6tc0:armhf (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package libiptc0:armhf.
Preparing to unpack .../02-libiptc0_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking libiptc0:armhf (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package libxtables12:armhf.
Preparing to unpack .../03-libxtables12_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking libxtables12:armhf (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package iptables.
Preparing to unpack .../04-iptables_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking iptables (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package libpkcs11-helper1:armhf.
Preparing to unpack .../05-libpkcs11-helper1_1.21-1_armhf.deb ...
Unpacking libpkcs11-helper1:armhf (1.21-1) ...
Selecting previously unselected package openvpn.
Preparing to unpack .../06-openvpn_2.4.0-6+deb9u2_armhf.deb ...
Unpacking openvpn (2.4.0-6+deb9u2) ...
Selecting previously unselected package libccid.
Preparing to unpack .../07-libccid_1.4.26-1_armhf.deb ...
Unpacking libccid (1.4.26-1) ...
Selecting previously unselected package pcscd.
Preparing to unpack .../08-pcscd_1.8.20-1_armhf.deb ...
Unpacking pcscd (1.8.20-1) ...
Selecting previously unselected package easy-rsa.
Preparing to unpack .../09-easy-rsa_2.2.2-2_all.deb ...
Unpacking easy-rsa (2.2.2-2) ...
Selecting previously unselected package opensc-pkcs11:armhf.
Preparing to unpack .../10-opensc-pkcs11_0.16.0-3_armhf.deb ...
Unpacking opensc-pkcs11:armhf (0.16.0-3) ...
Selecting previously unselected package opensc.
Preparing to unpack .../11-opensc_0.16.0-3_armhf.deb ...
Unpacking opensc (0.16.0-3) ...
Setting up libpkcs11-helper1:armhf (1.21-1) ...
Setting up opensc-pkcs11:armhf (0.16.0-3) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Setting up libxtables12:armhf (1.6.0+snapshot20161117-6) ...
Processing triggers for systemd (232-25+deb9u1) ...
Setting up easy-rsa (2.2.2-2) ...
Setting up libccid (1.4.26-1) ...
Setting up libip6tc0:armhf (1.6.0+snapshot20161117-6) ...
Setting up liblzo2-2:armhf (2.08-1.2) ...
Setting up opensc (0.16.0-3) ...
Setting up pcscd (1.8.20-1) ...
Created symlink /etc/systemd/system/sockets.target.wants/pcscd.socket → /lib/systemd/system/pcscd.socket.
Setting up libiptc0:armhf (1.6.0+snapshot20161117-6) ...
Setting up openvpn (2.4.0-6+deb9u2) ...
[ ok ] Restarting virtual private network daemon.:.
Created symlink /etc/systemd/system/multi-user.target.wants/openvpn.service → /lib/systemd/system/openvpn.service.
Setting up iptables (1.6.0+snapshot20161117-6) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for systemd (232-25+deb9u1) ...
[ OK ] G_AGI: openvpn easy-rsa iptables
DietPi-Services
─────────────────────────────────────────────────────
Mode: stop
[ OK ] stop : cron
[ OK ] stop : nfs-kernel-server
[ OK ] stop : lighttpd
[ OK ] stop : php7.0-fpm
[ OK ] stop : dnsmasq
[ OK ] stop : pihole-FTL
[ OK ] stop : openvpn
DietPi-Software
─────────────────────────────────────────────────────
Mode: Optimize and configure software
[ INFO ] Applying DietPi optimizations and configurations for RPi Zero W (armv6l), please wait...
DietPi-Software
─────────────────────────────────────────────────────
Mode: Configuring OpenVPN: vpn server
[ INFO ] Generating unique OpenVPN certificates and keys. Please wait...
Generating DH parameters, 1024 bit long safe prime, generator 2
This is going to take a long time
...............................+.....................................................+................+.........................................................+........................+...........+..............................................................................+...............................................................................+.......+..+...+....................................................+..........................+................++*++*++*
**************************************************************
No /etc/openvpn/easy-rsa/openssl.cnf file could be found
Further invocations will fail
**************************************************************
NOTE: If you run ./clean-all, I will be doing a rm -rf on /etc/openvpn/easy-rsa/keys
Using CA Common Name: DietPi_OpenVPN_Server
grep: /etc/openvpn/easy-rsa/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x
grep: /etc/openvpn/easy-rsa/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x
cp: cannot stat '/etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Server.crt': No such file or directory
cp: cannot stat '/etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Server.key': No such file or directory
cp: cannot stat '/etc/openvpn/easy-rsa/keys/ca.crt': No such file or directory
grep: /etc/openvpn/easy-rsa/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x
cat: /etc/openvpn/ca.crt: No such file or directory
cat: /etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Client.crt: No such file or directory
cat: /etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Client.key: No such file or directory
DietPi-Services
─────────────────────────────────────────────────────
Mode: dietpi_controlled
[ OK ] dietpi_controlled : cron
.... reboot
```
-----------
After reboot:
```
root@RPi-Zero:~# dietpi-services status
[ OK ] Root access verified.
DietPi-Services
─────────────────────────────────────────────────────
Mode: status
[ OK ] cron active (running) since Fri 2018-02-02 09:49:32 CET; 1min 31s ago
[ OK ] nfs-kernel-server active (exited) since Fri 2018-02-02 09:49:34 CET; 1min 29s ago
[ OK ] lighttpd active (running) since Fri 2018-02-02 09:49:35 CET; 1min 28s ago
[ OK ] php7.0-fpm active (running) since Fri 2018-02-02 09:49:46 CET; 1min 17s ago
[ OK ] dnsmasq active (running) since Fri 2018-02-02 09:49:48 CET; 1min 16s ago
[ OK ] pihole-FTL active (running) since Fri 2018-02-02 09:49:49 CET; 1min 15s ago
[ OK ] openvpn active (exited) since Fri 2018-02-02 09:49:50 CET; 1min 14s ago
```
```
root@RPi-Zero:~# service openvpn status
● openvpn.service - OpenVPN service
Loaded: loaded (/lib/systemd/system/openvpn.service; disabled; vendor preset: enabled)
Active: active (exited) since Fri 2018-02-02 09:49:50 CET; 1min 54s ago
Process: 1405 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 1405 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/openvpn.service
Feb 02 09:49:50 RPi-Zero systemd[1]: Starting OpenVPN service...
Feb 02 09:49:50 RPi-Zero systemd[1]: Started OpenVPN service.
```
```
root@RPi-Zero:~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.100.103 0.0.0.0 UG 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
```
root@RPi-Zero:~# ifconfig | grep tun
root@RPi-Zero:~#
```
```
root@RPi-Zero:~# service openvpn restart
root@RPi-Zero:~# cat /var/log/syslog | grep OpenVPN
Feb 2 15:13:13 RPi-Zero systemd[1]: Started OpenVPN service.
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: --ca fails with 'ca.crt': No such file or directory
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: --cert fails with 'DietPi_OpenVPN_Server.crt': No such file or directory
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: WARNING: cannot stat file 'DietPi_OpenVPN_Server.key': No such file or directory (errno=2)
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: --key fails with 'DietPi_OpenVPN_Server.key': No such file or directory
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: Please correct these errors.
Feb 2 15:13:14 RPi-Zero systemd[1]: openvpn@server.service: Control process exited, code=exited status=1
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Use --help for more information.
Feb 2 15:13:14 RPi-Zero systemd[1]: Failed to start OpenVPN connection to server.
Feb 2 15:13:14 RPi-Zero systemd[1]: openvpn@server.service: Unit entered failed state.
Feb 2 15:13:14 RPi-Zero systemd[1]: openvpn@server.service: Failed with result 'exit-code'.
```
```
root@RPi-Zero:~# ls -lah /etc/openvpn/
total 32K
drwxr-xr-x 5 root root 4.0K Feb 2 09:48 .
drwxr-xr-x 86 root root 4.0K Feb 2 09:48 ..
drwxr-xr-x 2 root root 4.0K Jul 18 2017 client
-rw-r--r-- 1 root root 245 Feb 2 09:48 dh1024.pem
drwxr-xr-x 3 root root 4.0K Feb 2 09:48 easy-rsa
drwxr-xr-x 2 root root 4.0K Jul 18 2017 server
-rw-r--r-- 1 root root 360 Feb 2 09:48 server.conf
-rwxr-xr-x 1 root root 1.3K Jul 18 2017 update-resolv-conf
```
```
root@RPi-Zero:~# ls -lah /etc/openvpn/easy-rsa/
total 124K
drwxr-xr-x 3 root root 4.0K Feb 2 09:48 .
drwxr-xr-x 5 root root 4.0K Feb 2 09:48 ..
-rwxr-xr-x 1 root root 119 Feb 2 09:47 build-ca
-rwxr-xr-x 1 root root 352 Feb 2 09:47 build-dh
-rwxr-xr-x 1 root root 188 Feb 2 09:47 build-inter
-rwxr-xr-x 1 root root 163 Feb 2 09:47 build-key
-rwxr-xr-x 1 root root 157 Feb 2 09:47 build-key-pass
-rwxr-xr-x 1 root root 249 Feb 2 09:47 build-key-pkcs12
-rwxr-xr-x 1 root root 268 Feb 2 09:47 build-key-server
-rwxr-xr-x 1 root root 213 Feb 2 09:47 build-req
-rwxr-xr-x 1 root root 158 Feb 2 09:47 build-req-pass
-rwxr-xr-x 1 root root 449 Feb 2 09:47 clean-all
-rwxr-xr-x 1 root root 1.5K Feb 2 09:47 inherit-inter
drwx------ 2 root root 4.0K Feb 2 09:48 keys
-rwxr-xr-x 1 root root 302 Feb 2 09:47 list-crl
-rwxr-xr-x 1 root root 7.7K Feb 2 09:47 openssl-0.9.6.cnf
-rwxr-xr-x 1 root root 8.3K Feb 2 09:47 openssl-0.9.8.cnf
-rwxr-xr-x 1 root root 8.2K Feb 2 09:47 openssl-1.0.0.cnf
-rwxr-xr-x 1 root root 13K Feb 2 09:47 pkitool
-rwxr-xr-x 1 root root 1.1K Feb 2 09:47 revoke-full
-rwxr-xr-x 1 root root 178 Feb 2 09:47 sign-req
-rwxr-xr-x 1 root root 2.3K Feb 2 09:47 vars
-rwxr-xr-x 1 root root 740 Feb 2 09:47 whichopensslcnf
```
```
root@RPi-Zero:~# ls -lah /etc/openvpn/easy-rsa/keys/
total 16K
drwx------ 2 root root 4.0K Feb 2 09:48 .
drwxr-xr-x 3 root root 4.0K Feb 2 09:48 ..
-rw-r--r-- 1 root root 269 Feb 2 09:48 DietPi_OpenVPN_Client.ovpn
-rw-r--r-- 1 root root 0 Feb 2 09:48 index.txt
-rw-r--r-- 1 root root 3 Feb 2 09:48 serial
``` | 1.0 | DietPi Software | OpenVPN doesn't work - @Fourdee
http://dietpi.com/phpbb/viewtopic.php?f=11&t=2768
Quick-Check:
RPi Zero W (armv6l) DietPi v6.0
```
Welcome to DietPi-Software
DietPi-Software
─────────────────────────────────────────────────────
Mode: Update & upgrade APT
[ INFO ] APT upgrade, please wait...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[ OK ] G_AGUG
DietPi-Software
─────────────────────────────────────────────────────
Mode: Checking for prerequisite software
[ INFO ] Rsyslog will be installed
DietPi-Software
─────────────────────────────────────────────────────
Mode: Installing Rsyslog: system logging
[ INFO ] APT installation for: rsyslog --no-install-recommends, please wait...
Selecting previously unselected package liblogging-stdlog0:armhf.
(Reading database ... 22943 files and directories currently installed.)
Preparing to unpack .../liblogging-stdlog0_1.0.5-2_armhf.deb ...
Unpacking liblogging-stdlog0:armhf (1.0.5-2) ...
Selecting previously unselected package libestr0.
Preparing to unpack .../libestr0_0.1.10-2_armhf.deb ...
Unpacking libestr0 (0.1.10-2) ...
Selecting previously unselected package libfastjson4:armhf.
Preparing to unpack .../libfastjson4_0.99.4-1_armhf.deb ...
Unpacking libfastjson4:armhf (0.99.4-1) ...
Selecting previously unselected package liblognorm5:armhf.
Preparing to unpack .../liblognorm5_2.0.1-1.1_armhf.deb ...
Unpacking liblognorm5:armhf (2.0.1-1.1) ...
Selecting previously unselected package rsyslog.
Preparing to unpack .../rsyslog_8.24.0-1_armhf.deb ...
Unpacking rsyslog (8.24.0-1) ...
Setting up libestr0 (0.1.10-2) ...
Setting up libfastjson4:armhf (0.99.4-1) ...
Setting up liblogging-stdlog0:armhf (1.0.5-2) ...
Setting up liblognorm5:armhf (2.0.1-1.1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for systemd (232-25+deb9u1) ...
Setting up rsyslog (8.24.0-1) ...
Created symlink /etc/systemd/system/syslog.service → /lib/systemd/system/rsyslog.service.
Created symlink /etc/systemd/system/multi-user.target.wants/rsyslog.service → /lib/systemd/system/rsyslog.service.
Processing triggers for systemd (232-25+deb9u1) ...
[ OK ] G_AGI: rsyslog --no-install-recommends
DietPi-Services
─────────────────────────────────────────────────────
Mode: stop
[ OK ] stop : cron
[ OK ] stop : nfs-kernel-server
[ OK ] stop : lighttpd
[ OK ] stop : php7.0-fpm
[ OK ] stop : dnsmasq
[ OK ] stop : pihole-FTL
DietPi-Software
─────────────────────────────────────────────────────
Mode: Installing OpenVPN: vpn server
[ INFO ] APT installation for: openvpn easy-rsa iptables, please wait...
Preconfiguring packages ...
Selecting previously unselected package liblzo2-2:armhf.
(Reading database ... 23028 files and directories currently installed.)
Preparing to unpack .../00-liblzo2-2_2.08-1.2_armhf.deb ...
Unpacking liblzo2-2:armhf (2.08-1.2) ...
Selecting previously unselected package libip6tc0:armhf.
Preparing to unpack .../01-libip6tc0_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking libip6tc0:armhf (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package libiptc0:armhf.
Preparing to unpack .../02-libiptc0_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking libiptc0:armhf (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package libxtables12:armhf.
Preparing to unpack .../03-libxtables12_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking libxtables12:armhf (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package iptables.
Preparing to unpack .../04-iptables_1.6.0+snapshot20161117-6_armhf.deb ...
Unpacking iptables (1.6.0+snapshot20161117-6) ...
Selecting previously unselected package libpkcs11-helper1:armhf.
Preparing to unpack .../05-libpkcs11-helper1_1.21-1_armhf.deb ...
Unpacking libpkcs11-helper1:armhf (1.21-1) ...
Selecting previously unselected package openvpn.
Preparing to unpack .../06-openvpn_2.4.0-6+deb9u2_armhf.deb ...
Unpacking openvpn (2.4.0-6+deb9u2) ...
Selecting previously unselected package libccid.
Preparing to unpack .../07-libccid_1.4.26-1_armhf.deb ...
Unpacking libccid (1.4.26-1) ...
Selecting previously unselected package pcscd.
Preparing to unpack .../08-pcscd_1.8.20-1_armhf.deb ...
Unpacking pcscd (1.8.20-1) ...
Selecting previously unselected package easy-rsa.
Preparing to unpack .../09-easy-rsa_2.2.2-2_all.deb ...
Unpacking easy-rsa (2.2.2-2) ...
Selecting previously unselected package opensc-pkcs11:armhf.
Preparing to unpack .../10-opensc-pkcs11_0.16.0-3_armhf.deb ...
Unpacking opensc-pkcs11:armhf (0.16.0-3) ...
Selecting previously unselected package opensc.
Preparing to unpack .../11-opensc_0.16.0-3_armhf.deb ...
Unpacking opensc (0.16.0-3) ...
Setting up libpkcs11-helper1:armhf (1.21-1) ...
Setting up opensc-pkcs11:armhf (0.16.0-3) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Setting up libxtables12:armhf (1.6.0+snapshot20161117-6) ...
Processing triggers for systemd (232-25+deb9u1) ...
Setting up easy-rsa (2.2.2-2) ...
Setting up libccid (1.4.26-1) ...
Setting up libip6tc0:armhf (1.6.0+snapshot20161117-6) ...
Setting up liblzo2-2:armhf (2.08-1.2) ...
Setting up opensc (0.16.0-3) ...
Setting up pcscd (1.8.20-1) ...
Created symlink /etc/systemd/system/sockets.target.wants/pcscd.socket → /lib/systemd/system/pcscd.socket.
Setting up libiptc0:armhf (1.6.0+snapshot20161117-6) ...
Setting up openvpn (2.4.0-6+deb9u2) ...
[ ok ] Restarting virtual private network daemon.:.
Created symlink /etc/systemd/system/multi-user.target.wants/openvpn.service → /lib/systemd/system/openvpn.service.
Setting up iptables (1.6.0+snapshot20161117-6) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for systemd (232-25+deb9u1) ...
[ OK ] G_AGI: openvpn easy-rsa iptables
DietPi-Services
─────────────────────────────────────────────────────
Mode: stop
[ OK ] stop : cron
[ OK ] stop : nfs-kernel-server
[ OK ] stop : lighttpd
[ OK ] stop : php7.0-fpm
[ OK ] stop : dnsmasq
[ OK ] stop : pihole-FTL
[ OK ] stop : openvpn
DietPi-Software
─────────────────────────────────────────────────────
Mode: Optimize and configure software
[ INFO ] Applying DietPi optimizations and configurations for RPi Zero W (armv6l), please wait...
DietPi-Software
─────────────────────────────────────────────────────
Mode: Configuring OpenVPN: vpn server
[ INFO ] Generating unique OpenVPN certificates and keys. Please wait...
Generating DH parameters, 1024 bit long safe prime, generator 2
This is going to take a long time
...............................+.....................................................+................+.........................................................+........................+...........+..............................................................................+...............................................................................+.......+..+...+....................................................+..........................+................++*++*++*
**************************************************************
No /etc/openvpn/easy-rsa/openssl.cnf file could be found
Further invocations will fail
**************************************************************
NOTE: If you run ./clean-all, I will be doing a rm -rf on /etc/openvpn/easy-rsa/keys
Using CA Common Name: DietPi_OpenVPN_Server
grep: /etc/openvpn/easy-rsa/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x
grep: /etc/openvpn/easy-rsa/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x
cp: cannot stat '/etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Server.crt': No such file or directory
cp: cannot stat '/etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Server.key': No such file or directory
cp: cannot stat '/etc/openvpn/easy-rsa/keys/ca.crt': No such file or directory
grep: /etc/openvpn/easy-rsa/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x
cat: /etc/openvpn/ca.crt: No such file or directory
cat: /etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Client.crt: No such file or directory
cat: /etc/openvpn/easy-rsa/keys/DietPi_OpenVPN_Client.key: No such file or directory
DietPi-Services
─────────────────────────────────────────────────────
Mode: dietpi_controlled
[ OK ] dietpi_controlled : cron
.... reboot
```
-----------
After reboot:
```
root@RPi-Zero:~# dietpi-services status
[ OK ] Root access verified.
DietPi-Services
─────────────────────────────────────────────────────
Mode: status
[ OK ] cron active (running) since Fri 2018-02-02 09:49:32 CET; 1min 31s ago
[ OK ] nfs-kernel-server active (exited) since Fri 2018-02-02 09:49:34 CET; 1min 29s ago
[ OK ] lighttpd active (running) since Fri 2018-02-02 09:49:35 CET; 1min 28s ago
[ OK ] php7.0-fpm active (running) since Fri 2018-02-02 09:49:46 CET; 1min 17s ago
[ OK ] dnsmasq active (running) since Fri 2018-02-02 09:49:48 CET; 1min 16s ago
[ OK ] pihole-FTL active (running) since Fri 2018-02-02 09:49:49 CET; 1min 15s ago
[ OK ] openvpn active (exited) since Fri 2018-02-02 09:49:50 CET; 1min 14s ago
```
```
root@RPi-Zero:~# service openvpn status
● openvpn.service - OpenVPN service
Loaded: loaded (/lib/systemd/system/openvpn.service; disabled; vendor preset: enabled)
Active: active (exited) since Fri 2018-02-02 09:49:50 CET; 1min 54s ago
Process: 1405 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 1405 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/openvpn.service
Feb 02 09:49:50 RPi-Zero systemd[1]: Starting OpenVPN service...
Feb 02 09:49:50 RPi-Zero systemd[1]: Started OpenVPN service.
```
```
root@RPi-Zero:~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.100.103 0.0.0.0 UG 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
```
root@RPi-Zero:~# ifconfig | grep tun
root@RPi-Zero:~#
```
```
root@RPi-Zero:~# service openvpn restart
root@RPi-Zero:~# cat /var/log/syslog | grep OpenVPN
Feb 2 15:13:13 RPi-Zero systemd[1]: Started OpenVPN service.
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: --ca fails with 'ca.crt': No such file or directory
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: --cert fails with 'DietPi_OpenVPN_Server.crt': No such file or directory
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: WARNING: cannot stat file 'DietPi_OpenVPN_Server.key': No such file or directory (errno=2)
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: --key fails with 'DietPi_OpenVPN_Server.key': No such file or directory
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Options error: Please correct these errors.
Feb 2 15:13:14 RPi-Zero systemd[1]: openvpn@server.service: Control process exited, code=exited status=1
Feb 2 15:13:14 RPi-Zero ovpn-server[5608]: Use --help for more information.
Feb 2 15:13:14 RPi-Zero systemd[1]: Failed to start OpenVPN connection to server.
Feb 2 15:13:14 RPi-Zero systemd[1]: openvpn@server.service: Unit entered failed state.
Feb 2 15:13:14 RPi-Zero systemd[1]: openvpn@server.service: Failed with result 'exit-code'.
```
```
root@RPi-Zero:~# ls -lah /etc/openvpn/
total 32K
drwxr-xr-x 5 root root 4.0K Feb 2 09:48 .
drwxr-xr-x 86 root root 4.0K Feb 2 09:48 ..
drwxr-xr-x 2 root root 4.0K Jul 18 2017 client
-rw-r--r-- 1 root root 245 Feb 2 09:48 dh1024.pem
drwxr-xr-x 3 root root 4.0K Feb 2 09:48 easy-rsa
drwxr-xr-x 2 root root 4.0K Jul 18 2017 server
-rw-r--r-- 1 root root 360 Feb 2 09:48 server.conf
-rwxr-xr-x 1 root root 1.3K Jul 18 2017 update-resolv-conf
```
```
root@RPi-Zero:~# ls -lah /etc/openvpn/easy-rsa/
total 124K
drwxr-xr-x 3 root root 4.0K Feb 2 09:48 .
drwxr-xr-x 5 root root 4.0K Feb 2 09:48 ..
-rwxr-xr-x 1 root root 119 Feb 2 09:47 build-ca
-rwxr-xr-x 1 root root 352 Feb 2 09:47 build-dh
-rwxr-xr-x 1 root root 188 Feb 2 09:47 build-inter
-rwxr-xr-x 1 root root 163 Feb 2 09:47 build-key
-rwxr-xr-x 1 root root 157 Feb 2 09:47 build-key-pass
-rwxr-xr-x 1 root root 249 Feb 2 09:47 build-key-pkcs12
-rwxr-xr-x 1 root root 268 Feb 2 09:47 build-key-server
-rwxr-xr-x 1 root root 213 Feb 2 09:47 build-req
-rwxr-xr-x 1 root root 158 Feb 2 09:47 build-req-pass
-rwxr-xr-x 1 root root 449 Feb 2 09:47 clean-all
-rwxr-xr-x 1 root root 1.5K Feb 2 09:47 inherit-inter
drwx------ 2 root root 4.0K Feb 2 09:48 keys
-rwxr-xr-x 1 root root 302 Feb 2 09:47 list-crl
-rwxr-xr-x 1 root root 7.7K Feb 2 09:47 openssl-0.9.6.cnf
-rwxr-xr-x 1 root root 8.3K Feb 2 09:47 openssl-0.9.8.cnf
-rwxr-xr-x 1 root root 8.2K Feb 2 09:47 openssl-1.0.0.cnf
-rwxr-xr-x 1 root root 13K Feb 2 09:47 pkitool
-rwxr-xr-x 1 root root 1.1K Feb 2 09:47 revoke-full
-rwxr-xr-x 1 root root 178 Feb 2 09:47 sign-req
-rwxr-xr-x 1 root root 2.3K Feb 2 09:47 vars
-rwxr-xr-x 1 root root 740 Feb 2 09:47 whichopensslcnf
```
```
root@RPi-Zero:~# ls -lah /etc/openvpn/easy-rsa/keys/
total 16K
drwx------ 2 root root 4.0K Feb 2 09:48 .
drwxr-xr-x 3 root root 4.0K Feb 2 09:48 ..
-rw-r--r-- 1 root root 269 Feb 2 09:48 DietPi_OpenVPN_Client.ovpn
-rw-r--r-- 1 root root 0 Feb 2 09:48 index.txt
-rw-r--r-- 1 root root 3 Feb 2 09:48 serial
```
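The `pkitool` and `cp` failures in the log above all trace back to one missing file: easy-rsa 2.2.2 ships `openssl-0.9.6.cnf`, `openssl-0.9.8.cnf` and `openssl-1.0.0.cnf` (all visible in the directory listing), but its `whichopensslcnf` helper does not recognize the OpenSSL 1.1 shipped with Debian Stretch, so no `openssl.cnf` is ever selected. A commonly suggested workaround — sketched here against a throwaway directory; on a real system the path would be `/etc/openvpn/easy-rsa`, and this is an assumption about the fix, not part of the original report — is to link the newest shipped template into place before re-running key generation:

```shell
# Throwaway stand-in for /etc/openvpn/easy-rsa (hypothetical sandbox path).
dir=$(mktemp -d)
touch "$dir/openssl-1.0.0.cnf"   # template actually shipped by easy-rsa 2.2.2

# The workaround: provide the file name that vars/whichopensslcnf expect.
ln -sf openssl-1.0.0.cnf "$dir/openssl.cnf"

# With openssl.cnf present, the real sequence would then be:
#   cd /etc/openvpn/easy-rsa && . ./vars && ./clean-all && ./build-ca
ls -l "$dir/openssl.cnf"
```

After the symlink exists, re-running the DietPi OpenVPN configuration step should no longer hit the "No /etc/openvpn/easy-rsa/openssl.cnf file could be found" error, though that was not verified in the original report.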
822,467 | 30,873,792,118 | IssuesEvent | 2023-08-03 13:06:38 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Printing in the destructor of a node in the edited scene crashes the editor on exit | bug topic:editor high priority crash | ### Godot version
Godot 4.1 e709ad4d6407e52dc62f00a471d13eb6c89f2c4c
### System information
Windows 10 64 bits NVIDIA GeForce GTX 1060
### Issue description
I have a `print_line` in the destructor of a custom class derived from `Node3D`. I use it in verbose mode to track when things happen. In previous versions, it worked fine. But in Godot 4.1, closing the editor crashes:
```
Exception has occurred: W32/0xC0000005
Unhandled exception thrown: read access violation.
this was 0xFFFFFFFFFFFFFFFF.
godot.windows.editor.dev.x86_64.exe!CowData<EditorLog::LogMessage>::_get_size() Line 81 (godot4_fork\core\templates\cowdata.h:81)
godot.windows.editor.dev.x86_64.exe!CowData<EditorLog::LogMessage>::size() Line 131 (godot4_fork\core\templates\cowdata.h:131)
godot.windows.editor.dev.x86_64.exe!Vector<EditorLog::LogMessage>::size() Line 93 (godot4_fork\core\templates\vector.h:93)
godot.windows.editor.dev.x86_64.exe!EditorLog::_process_message(const String & p_msg, EditorLog::MessageType p_type) Line 216 (godot4_fork\editor\editor_log.cpp:216)
godot.windows.editor.dev.x86_64.exe!EditorLog::add_message(const String & p_msg, EditorLog::MessageType p_type) Line 243 (godot4_fork\editor\editor_log.cpp:243)
godot.windows.editor.dev.x86_64.exe!EditorNode::_print_handler_impl(const String & p_string, bool p_error, bool p_rich) Line 6651 (godot4_fork\editor\editor_node.cpp:6651)
godot.windows.editor.dev.x86_64.exe!call_with_variant_args_static<String const &,bool,bool,0,1,2>(void(*)(const String &, bool, bool) p_method, const Variant * * p_args, Callable::CallError & r_error, IndexSequence<0,1,2> __formal) Line 777 (godot4_fork\core\variant\binder_common.h:777)
godot.windows.editor.dev.x86_64.exe!call_with_variant_args_static_ret<String const &,bool,bool>(void(*)(const String &, bool, bool) p_method, const Variant * * p_args, int p_argcount, Variant & r_ret, Callable::CallError & r_error) Line 847 (godot4_fork\core\variant\binder_common.h:847)
godot.windows.editor.dev.x86_64.exe!CallableCustomStaticMethodPointerRet<void,String const &,bool,bool>::call(const Variant * * p_arguments, int p_argcount, Variant & r_return_value, Callable::CallError & r_call_error) Line 309 (godot4_fork\core\object\callable_method_pointer.h:309)
godot.windows.editor.dev.x86_64.exe!Callable::callp(const Variant * * p_arguments, int p_argcount, Variant & r_return_value, Callable::CallError & r_call_error) Line 51 (godot4_fork\core\variant\callable.cpp:51)
godot.windows.editor.dev.x86_64.exe!CallableCustomBind::call(const Variant * * p_arguments, int p_argcount, Variant & r_return_value, Callable::CallError & r_call_error) Line 145 (godot4_fork\core\variant\callable_bind.cpp:145)
godot.windows.editor.dev.x86_64.exe!Callable::callp(const Variant * * p_arguments, int p_argcount, Variant & r_return_value, Callable::CallError & r_call_error) Line 51 (godot4_fork\core\variant\callable.cpp:51)
godot.windows.editor.dev.x86_64.exe!CallQueue::_call_function(const Callable & p_callable, const Variant * p_args, int p_argcount, bool p_show_error) Line 220 (godot4_fork\core\object\message_queue.cpp:220)
godot.windows.editor.dev.x86_64.exe!CallQueue::flush() Line 322 (godot4_fork\core\object\message_queue.cpp:322)
godot.windows.editor.dev.x86_64.exe!Main::cleanup(bool p_force) Line 3562 (godot4_fork\main\main.cpp:3562)
godot.windows.editor.dev.x86_64.exe!widechar_main(int argc, wchar_t * * argv) Line 184 (godot4_fork\platform\windows\godot_windows.cpp:184)
godot.windows.editor.dev.x86_64.exe!_main() Line 204 (godot4_fork\platform\windows\godot_windows.cpp:204)
godot.windows.editor.dev.x86_64.exe!main(int argc, char * * argv) Line 218 (godot4_fork\platform\windows\godot_windows.cpp:218)
godot.windows.editor.dev.x86_64.exe!WinMain(HINSTANCE__ * hInstance, HINSTANCE__ * hPrevInstance, char * lpCmdLine, int nCmdShow) Line 232 (godot4_fork\platform\windows\godot_windows.cpp:232)
godot.windows.editor.dev.x86_64.exe!invoke_main() Line 107 (d:\a01\_work\12\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:107)
godot.windows.editor.dev.x86_64.exe!__scrt_common_main_seh() Line 288 (d:\a01\_work\12\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
godot.windows.editor.dev.x86_64.exe!__scrt_common_main() Line 331 (d:\a01\_work\12\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:331)
godot.windows.editor.dev.x86_64.exe!WinMainCRTStartup(void * __formal) Line 17 (d:\a01\_work\12\s\src\vctools\crt\vcstartup\src\startup\exe_winmain.cpp:17)
kernel32.dll!00007ffdcb657614() (Unknown Source:0)
ntdll.dll!00007ffdcb8c26b1() (Unknown Source:0)
```
### Steps to reproduce
My own case is in C++, but it can be reproduced with GDScript.
Have a scene with a node having the following script on it:
```gdscript
@tool
extends Node3D
func _notification(what):
	if what == NOTIFICATION_PREDELETE:
		print("Destroying ", name)
```
Have that scene open, and close the editor.
Now unfortunately you might not notice anything depending on how your OS handles that (as with many crashes occurring on exit).
I work with debug builds and that definitely makes the crash visible in the debugger. I also use the following command line to build Godot on Windows:
```
scons p=windows target=editor dev_build=yes warnings=all werror=yes debug_crt=yes -j4
```
### Minimal reproduction project
[PrintInNodeDestructor.zip](https://github.com/godotengine/godot/files/12062273/PrintInNodeDestructor.zip)
Open the project, open the scene, then close the editor.
130,368 | 5,114,662,251 | IssuesEvent | 2017-01-06 19:14:11 | BioAssayOntology/BAO | https://api.github.com/repos/BioAssayOntology/BAO | closed | BAO Modeling: A list of questions and suggestions from Linda Zander Balderud | auto-migrated BAO_PharmAZBAOPAZ-5 bug Priority-High Type-Review | <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Saturday May 09, 2015 at 11:32 GMT_
_Originally opened as https://github.com/linikujp/bioassayontology/issues/9_
---
```
Reported by Linda Zander Balderud on 22/Dec/14 10:03 AM
A list of questions and suggestions that are either new or have been discussed
before but are not yet changed in the ontology.
1. Does ‘ratio’ belong to ‘unit of measurement’ or should it be an
endpoint (see my comment on phenotypic endpoint annotation)?
2. Addition of ‘Secretory alkaline phosphatase reporter gene assay’ and
‘ligase activity assay’ bioassay.
3. Name change from cAMP, cGMP and IP1 redistribution assay to cAMP, cGMP and
IP1 second messenger
4. Assay design method and Physical detection method of HPLC assays?
5. In a knockdown gene expression modulation method, which property should I
use to define the targeted gene?
6. Is there a way to annotate protocol version number, I did see it in a
previous version of BAO.
7. Property to define Image analysis software
8. Property to define emission filter
9. Will dispensing instrument be part of the ontology? I once put together a
list of instruments we use within AZ as requested from Uma but it seems not to be in
BAO2.0.
10. Missing the role “coating agent” (the role of for example serum albumin
and fibronectin)
11. Change ‘mode of action’ to ‘mechanism of action’?
12. Both in-house and within ChEMBL ‘positive allosteric modulation’ and
‘negative allosteric modulation’ is used. Could these be added as
subclasses to ‘allosteric modulation’?
13. ‘Allosteric inhibition’, ‘allosteric activation’ and ‘allosteric
agonism’ (allosteric agonism and full agonism are misspelled in my version of
BAO) are part of the ontology, could ‘allosteric antagonism’ be added as
well?
14. It would be useful to discuss if the ion channel terms ‘opener’ and
‘blocker’ should be part of the ontology.
```
Original issue reported on code.google.com by `chengqiong7@gmail.com` on 2 Feb 2015 at 7:36
| 1.0 | BAO Modeling: A list of questions and suggestions from Linda Zander Balderud - <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Saturday May 09, 2015 at 11:32 GMT_
_Originally opened as https://github.com/linikujp/bioassayontology/issues/9_
---
```
Reported by Linda Zander Balderud on 22/Dec/14 10:03 AM
A list of questions and suggestions that are either new or have been discussed
before but are not yet changed in the ontology.
1. Does ‘ratio’ belong to ‘unit of measurement’ or should it be an
endpoint (see my comment on phenotypic endpoint annotation)?
2. Addition of ‘Secretory alkaline phosphatase reporter gene assay’ and
‘ligase activity assay’ bioassay.
3. Name change from cAMP, cGMP and IP1 redistribution assay to cAMP, cGMP and
IP1 second messenger
4. Assay design method and Physical detection method of HPLC assays?
5. In a knockdown gene expression modulation method, which property should I
use to define the targeted gene?
6. Is there a way to annotate protocol version number, I did see it in a
previous version of BAO.
7. Property to define Image analysis software
8. Property to define emission filter
9. Will dispensing instrument be part of the ontology? I once put together a
list of instruments we use within AZ as requested from Uma but it seems not to be in
BAO2.0.
10. Missing the role “coating agent” (the role of for example serum albumin
and fibronectin)
11. Change ‘mode of action’ to ‘mechanism of action’?
12. Both in-house and within ChEMBL ‘positive allosteric modulation’ and
‘negative allosteric modulation’ is used. Could these be added as
subclasses to ‘allosteric modulation’?
13. ‘Allosteric inhibition’, ‘allosteric activation’ and ‘allosteric
agonism’ (allosteric agonism and full agonism are misspelled in my version of
BAO) are part of the ontology, could ‘allosteric antagonism’ be added as
well?
14. It would be useful to discuss if the ion channel terms ‘opener’ and
‘blocker’ should be part of the ontology.
```
Original issue reported on code.google.com by `chengqiong7@gmail.com` on 2 Feb 2015 at 7:36
| priority | bao modeling a list of questions and suggestions from linda zander balderud issue by saturday may at gmt originally opened as reported by linda zander balderud on dec am a list of questions and suggestions that are either new or have been discussed before but are not yet changed in the ontology does ’ratio’ belong to ’unit of measurement’ or should it be an endpoint see my comment on phenotypic endpoint annotation addition of ‘secretory alkaline phosphatase reporter gene assay’ and ‘ligase activity assay’ bioassay name change from camp cgmp and redistribition assay to camp cgmp and second messenger assay design method and physical detection method of hplc assays in a knockdown gene expression modulation method which property should i use do define the targeted gene is there a way to annotate protocol version number i did see it in a previous version of bao property to define image analysis software property to define emission filter will dispensing instrument be part of the ontology i once put together of instruments we use within az as requested from uma but it seems not to be in missing the role “coating agent” the role of for example serum albumin and fibronectin change ‘mode of action’ to ‘mechanism of action’ both in house and within chembl ‘positive allosteric modulation’ and ‘negative allosteric modulation’ is used could these be added as subclasses to ‘allosteric modulation’ ‘allosteric inhibition’ ‘allosteric activation and ‘allosteric agonism’ allosteric agonism and full agonism are misspelled in my version of bao are part of the ontology could ‘allosteric antagonism’ be added as well it would be useful to discuss if the ion channel terms ‘opener’ and ‘blocker’ should be part of the ontology original issue reported on code google com by gmail com on feb at | 1 |
240,728 | 7,805,179,403 | IssuesEvent | 2018-06-11 09:57:16 | rootio/rootio_web | https://api.github.com/repos/rootio/rootio_web | closed | Click test web portal | High priority enhancement to do | Click test the various buttons and links as a preliminary step to iron out obvious bugs. | 1.0 | Click test web portal - Click test the various buttons and links as a preliminary step to iron out obvious bugs. | priority | click test web portal click test the various buttons and links as a preliminary step to iron out obvious bugs | 1 |
583,963 | 17,402,056,065 | IssuesEvent | 2021-08-02 21:15:21 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Request Publish ends up publishing if the requestor is a publisher | bug priority: high | ## Describe the bug
Request Publish publishes the items instead of sending them for approval.
## To Reproduce
Steps to reproduce the behavior:
1. Edit an item
2. Click on request publish
3. Watch it go live
## Expected behavior
Request Publish should send the item for approval and not live, even if the requestor is an admin.
## Screenshots
N/A
## Logs
N/A
## Specs
### Version
4.0.0-SNAPSHOT on 2021/7/9
### OS
Linux
### Browser
Chrome
## Additional context
None
| 1.0 | [studio-ui] Request Publish ends up publishing if the requestor is a publisher - ## Describe the bug
Request Publish publishes the items instead of sending them for approval.
## To Reproduce
Steps to reproduce the behavior:
1. Edit an item
2. Click on request publish
3. Watch it go live
## Expected behavior
Request Publish should send the item for approval and not live, even if the requestor is an admin.
## Screenshots
N/A
## Logs
N/A
## Specs
### Version
4.0.0-SNAPSHOT on 2021/7/9
### OS
Linux
### Browser
Chrome
## Additional context
None
| priority | request publish ends up publishing if the requestor is a publisher describe the bug request publish publishes the items instead of sending them for approval to reproduce steps to reproduce the behavior edit an item click on request publish watch it go live expected behavior request publish should send the item for approval and not live even if the requestor is an admin screenshots n a logs n a specs version snapshot on os linux browser chrome additional context none | 1 |
735,106 | 25,379,488,452 | IssuesEvent | 2022-11-21 16:25:44 | NomicFoundation/hardhat | https://api.github.com/repos/NomicFoundation/hardhat | closed | Revert due to overflow misdetected as `CONTRACT_TOO_LARGE_ERROR` on a contract compiled with `viaIR: true` | type:bug priority:high package:hardhat-core not-stale | I'm trying to run OpenZeppelin tests using the new, experimental IR code generator of the Solidity compiler (`viaIR: true`) and I'm running into failures that do not happen when using the default code generator. I tracked it down to Hardhat's stack trace module misdetecting a revert caused by an integer overflow as `CONTRACT_TOO_LARGE_ERROR`.
I think that the most likely cause is that some of the current heuristics make too strong assumptions about the output from the default code generator and need to be updated to handle the new one as well. I also cannot exclude the possibility that it's actually some kind of problem with the compiler's output. Please check it out and let me know if there's actually something that needs to be fixed in the compiler.
### Repro
Just OpenZeppelin tests with version pragmas stripped from contracts and custom compiler settings:
```bash
git clone https://github.com/OpenZeppelin/openzeppelin-contracts.git
cd openzeppelin-contracts
cat <<EOF >> hardhat.config.js
module.exports = {
solidity: {
version: '0.8.10',
settings: {
viaIR: true,
optimizer: {
enabled: true,
runs: 200,
},
},
},
networks: {
hardhat: {
blockGasLimit: 10000000,
allowUnlimitedContractSize: false,
},
},
};
EOF
npm install
find . test -name '*.sol' -type f -print0 | xargs -0 sed -i -E -e 's/pragma solidity [^;]+;/pragma solidity *;/'
npm test test/token/ERC20/extensions/ERC20FlashMint.test.js
```
Note: `ERC20FlashMint.test.js` is not the only test that fails - 57 out of 751 tests fail if you run the full test suite - but it's the one I investigated so far.
#### Output
```
Compiling 219 files with 0.8.10
Compilation finished successfully
Contract: ERC20FlashMint
maxFlashLoan
✓ token match
✓ token mismatch
flashFee
✓ token match
✓ token mismatch (44ms)
flashLoan
✓ success (63ms)
✓ missing return value (51ms)
✓ missing approval (39ms)
✓ unavailable funds (51ms)
(node:181971) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 60)
(node:181971) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
1) more than maxFlashLoan
8 passing (22s)
1 failing
/tmp/openzeppelin-contracts/node_modules/hardhat/internal/hardhat-network/stack-traces/solidity-errors.js:108
return new SolidityCallSite(sourceReference.file.sourceName, sourceReference.contract, sourceReference.function !== undefined
^
TypeError: Cannot read property 'file' of undefined
(Use `node --trace-uncaught ...` to show where the exception was thrown)
```
#### The failing test case
```javascript
it ('more than maxFlashLoan', async function () {
const receiver = await ERC3156FlashBorrowerMock.new(true, true);
const data = this.token.contract.methods.transfer(other, 10).encodeABI();
// _mint overflow reverts using a panic code. No reason string.
await expectRevert.unspecified(this.token.flashLoan(receiver.address, this.token.address, MAX_UINT256, data));
});
``` | 1.0 | Revert due to overflow misdetected as `CONTRACT_TOO_LARGE_ERROR` on a contract compiled with `viaIR: true` - I'm trying to run OpenZeppelin tests using the new, experimental IR code generator of the Solidity compiler (`viaIR: true`) and I'm running into failures that do not happen when using the default code generator. I tracked it down to Hardhat's stack trace module misdetecting a revert caused by an integer overflow as `CONTRACT_TOO_LARGE_ERROR`.
I think that the most likely cause is that some of the current heuristics make too strong assumptions about the output from the default code generator and need to be updated to handle the new one as well. I also cannot exclude the possibility that it's actually some kind of problem with the compiler's output. Please check it out and let me know if there's actually something that needs to be fixed in the compiler.
### Repro
Just OpenZeppelin tests with version pragmas stripped from contracts and custom compiler settings:
```bash
git clone https://github.com/OpenZeppelin/openzeppelin-contracts.git
cd openzeppelin-contracts
cat <<EOF >> hardhat.config.js
module.exports = {
solidity: {
version: '0.8.10',
settings: {
viaIR: true,
optimizer: {
enabled: true,
runs: 200,
},
},
},
networks: {
hardhat: {
blockGasLimit: 10000000,
allowUnlimitedContractSize: false,
},
},
};
EOF
npm install
find . test -name '*.sol' -type f -print0 | xargs -0 sed -i -E -e 's/pragma solidity [^;]+;/pragma solidity *;/'
npm test test/token/ERC20/extensions/ERC20FlashMint.test.js
```
Note: `ERC20FlashMint.test.js` is not the only test that fails - 57 out of 751 tests fail if you run the full test suite - but it's the one I investigated so far.
#### Output
```
Compiling 219 files with 0.8.10
Compilation finished successfully
Contract: ERC20FlashMint
maxFlashLoan
✓ token match
✓ token mismatch
flashFee
✓ token match
✓ token mismatch (44ms)
flashLoan
✓ success (63ms)
✓ missing return value (51ms)
✓ missing approval (39ms)
✓ unavailable funds (51ms)
(node:181971) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 60)
(node:181971) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
1) more than maxFlashLoan
8 passing (22s)
1 failing
/tmp/openzeppelin-contracts/node_modules/hardhat/internal/hardhat-network/stack-traces/solidity-errors.js:108
return new SolidityCallSite(sourceReference.file.sourceName, sourceReference.contract, sourceReference.function !== undefined
^
TypeError: Cannot read property 'file' of undefined
(Use `node --trace-uncaught ...` to show where the exception was thrown)
```
#### The failing test case
```javascript
it ('more than maxFlashLoan', async function () {
const receiver = await ERC3156FlashBorrowerMock.new(true, true);
const data = this.token.contract.methods.transfer(other, 10).encodeABI();
// _mint overflow reverts using a panic code. No reason string.
await expectRevert.unspecified(this.token.flashLoan(receiver.address, this.token.address, MAX_UINT256, data));
});
``` | priority | revert due to overflow misdetected as contract too large error on a contract compiled with viair true i m trying to run openzeppelin tests using the new experimental ir code generator of the solidity compiler viair true and i m running into failures that do not happen when using the default code generator i tracked it down to hardhat s stack trace module misdetecting a revert caused by an integer overflow as contract too large error i think that the most likely cause is that some the current heuristics make too strong assumptions about the output from the default code generator and need to be updated to handle the new one as well i cannot also exclude the possibility that it s actually some kind of problem with compiler s output please check it out and let me know if there s actually something that needs to be fixed in the compiler repro just openzeppelin tests with version pragmas stripped from contracts and custom compiler settings bash git clone cd openzeppelin contracts cat hardhat config js module exports solidity version settings viair true optimizer enabled true runs networks hardhat blockgaslimit allowunlimitedcontractsize false eof npm install find test name sol type f xargs sed i e e s pragma solidity pragma solidity npm test test token extensions test js note test js is not the only test that fails out of tests fail if you run the full test suite but it s the one i investigated so far output compiling files with compilation finished successfully contract maxflashloan ✓ token match ✓ token mismatch flashfee ✓ token match ✓ token mismatch flashloan ✓ success ✓ missing return value ✓ missing approval ✓ unavailable funds node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch to terminate the node process on unhandled promise rejection use the cli flag unhandled rejections strict see 
rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code more than maxflashloan passing failing tmp openzeppelin contracts node modules hardhat internal hardhat network stack traces solidity errors js return new soliditycallsite sourcereference file sourcename sourcereference contract sourcereference function undefined typeerror cannot read property file of undefined use node trace uncaught to show where the exception was thrown the failing test case javascript it more than maxflashloan async function const receiver await new true true const data this token contract methods transfer other encodeabi mint overflow reverts using a panic code no reason string await expectrevert unspecified this token flashloan receiver address this token address max data | 1 |
565,950 | 16,773,428,851 | IssuesEvent | 2021-06-14 17:35:07 | fecgov/fec-proxy | https://api.github.com/repos/fecgov/fec-proxy | opened | Redirect three e-regs to a Federal Register notice | High priority Pairing opportunity | We need to update https://github.com/fecgov/fec-proxy/blob/develop/nginx.conf to exclude three e-regs that we are going to redirect to a federal register notice until we are able to update e-regs for 2021.
These three pages:
https://www.fec.gov/regulations/111-24/2020-annual-111#111-24
https://www.fec.gov/regulations/111-43/2020-annual-111#111-43
https://www.fec.gov/regulations/111-44/2020-annual-111#111-44
will need to redirect to https://sers.fec.gov/fosers/showpdf.htm?docid=412715.
| 1.0 | Redirect three e-regs to a Federal Register notice - We need to update https://github.com/fecgov/fec-proxy/blob/develop/nginx.conf to exclude three e-regs that we are going to redirect to a federal register notice until we are able to update e-regs for 2021.
These three pages:
https://www.fec.gov/regulations/111-24/2020-annual-111#111-24
https://www.fec.gov/regulations/111-43/2020-annual-111#111-43
https://www.fec.gov/regulations/111-44/2020-annual-111#111-44
will need to redirect to https://sers.fec.gov/fosers/showpdf.htm?docid=412715.
| priority | redirect three e regs to a federal register notice we need to update to exclude three e regs that we are going to redirect to a federal register notice until we are able to update e regs for these three pages will need to redirect to | 1 |
796,296 | 28,105,860,284 | IssuesEvent | 2023-03-31 00:32:27 | department-of-veterans-affairs/abd-vro | https://api.github.com/repos/department-of-veterans-affairs/abd-vro | closed | Notify a Slack channel and email addresses of form 526 submissions on va.gov (needed by 10/17) | Engineer blocked high priority | ***Target production deployment by 10/14***
**User Story**
- As a designated claim assistant, I want to receive email notifications about newly submitted 526 forms from va.gov, so that I can classify the contention and set the PACT Act special issue as appropriate.
- As a VRO team member, I want to track newly submitted 526 forms from va.gov in a Slack channel so that I have visibility into claims being passed to the MAS claim handoff endpoint
The emails and slack channel should be notified of all form 526 submissions on va.gov after claim establishment, excluding:
- single issue hypertension claims for increase
- claims with pending EPs
- claims where @decision_code is "NOTSVCCON" (See https://github.com/department-of-veterans-affairs/abd-vro/issues/2440)
The notifications should use the same format and include the same information as the notifications currently being tracked in #rrd-to-mas-tracking.
**Acceptance Criteria**
1. Create a new slack channel #va-gov-to-MAS-tracking
2. Notify the above slack channel plus a set of email addresses (@emilytheis will provide the list) when new 526s are submitted on va.gov, after claim establishment
3. Single issue hypertension claims for increase, as well as claims with pending EPs and claims where @decision_code is "NOTSVCCON" are excluded from these notifications
4. The notifications have the same format and information as the notifications in #rrd-to-mas-tracking
**Not included in this work**
This ticket covers listening for 526 submissions and notifying slack/email when those claims are established, but does not include passing those claims to the MAS Claims Handoff endpoint, which will be handled in a separate ticket.
**Notes about work**
- Relevant [slack thread](https://dsva.slack.com/archives/C03JRKKD11B/p1664465285664379)
| 1.0 | Notify a Slack channel and email addresses of form 526 submissions on va.gov (needed by 10/17) - ***Target production deployment by 10/14***
**User Story**
- As a designated claim assistant, I want to receive email notifications about newly submitted 526 forms from va.gov, so that I can classify the contention and set the PACT Act special issue as appropriate.
- As a VRO team member, I want to track newly submitted 526 forms from va.gov in a Slack channel so that I have visibility into claims being passed to the MAS claim handoff endpoint
The emails and slack channel should be notified of all form 526 submissions on va.gov after claim establishment, excluding:
- single issue hypertension claims for increase
- claims with pending EPs
- claims where @decision_code is "NOTSVCCON" (See https://github.com/department-of-veterans-affairs/abd-vro/issues/2440)
The notifications should use the same format and include the same information as the notifications currently being tracked in #rrd-to-mas-tracking.
**Acceptance Criteria**
1. Create a new slack channel #va-gov-to-MAS-tracking
2. Notify the above slack channel plus a set of email addresses (@emilytheis will provide the list) when new 526s are submitted on va.gov, after claim establishment
3. Single issue hypertension claims for increase, as well as claims with pending EPs and claims where @decision_code is "NOTSVCCON" are excluded from these notifications
4. The notifications have the same format and information as the notifications in #rrd-to-mas-tracking
**Not included in this work**
This ticket covers listening for 526 submissions and notifying slack/email when those claims are established, but does not include passing those claims to the MAS Claims Handoff endpoint, which will be handled in a separate ticket.
**Notes about work**
- Relevant [slack thread](https://dsva.slack.com/archives/C03JRKKD11B/p1664465285664379)
| priority | notify a slack channel and email addresses of form submissions on va gov needed by target production deployment by user story as a designated claim assistant i want to receive email notifications about newly submitted forms from va gov so that i can classify the contention and set the pact act special issue as appropriate as a vro team member i want to track newly submitted forms from va gov in a slack channel so that i have visibility into claims being passed to the mas claim handoff endpoint the emails and slack channel should be notified of all form submissions on va gov after claim establishment excluding single issue hypertension claims for increase claims with pending eps claims where decision code is notsvccon see the notifications should use the same format and include the same information as the notifications currently being tracked in rrd to mas tracking acceptance criteria create a new slack channel va gov to mas tracking notify the above slack channel plus a set of email addresses emilytheis will provide the list when new are submitted on va gov after claim establishment single issue hypertension claims for increase as well as claims with pending eps and claims where decision code is notsvccon are excluded from these notifications the notifications have the same format and information as the notifications in rrd to mas tracking not included in this work this ticket covers listening for submissions and notifying slack email when those claims are established but does not include passing those claims to the mas claims handoff endpoint which will be handled in a separate ticket notes about work relevant | 1 |
337,209 | 10,211,821,358 | IssuesEvent | 2019-08-14 17:54:06 | Alluxio/alluxio | https://api.github.com/repos/Alluxio/alluxio | closed | Support changing embedded journal masters without reformat | priority-high target-2.0.1 type-feature | **Is your feature request related to a problem? Please describe.**
The new embedded journal requires that the list of master hostnames stay the same. This makes it impossible to grow or shrink the number of masters without re-formatting the journal.
**Describe the solution you'd like**
An admin tool that allows masters to be added or removed from a running cluster. Even better, a tool that could add/remove masters whether or not the cluster is currently running.
**Describe alternatives you've considered**
It's possible to use DNS to migrate masters without reformat (same hostname, different machine). However, this doesn't work for growing/shrinking the cluster. | 1.0 | Support changing embedded journal masters without reformat - **Is your feature request related to a problem? Please describe.**
The new embedded journal requires that the list of master hostnames stay the same. This makes it impossible to grow or shrink the number of masters without re-formatting the journal.
**Describe the solution you'd like**
An admin tool that allows masters to be added or removed from a running cluster. Even better, a tool that could add/remove masters whether or not the cluster is currently running.
**Describe alternatives you've considered**
It's possible to use DNS to migrate masters without reformat (same hostname, different machine). However, this doesn't work for growing/shrinking the cluster. | priority | support changing embedded journal masters without reformat is your feature request related to a problem please describe the new embedded journal requires that the list of master hostnames stay the same this makes it impossible to grow or shrink the number of masters without re formatting the journal describe the solution you d like an admin tool that allows masters to be added or removed from a running cluster even better a tool that could add remove masters whether or not the cluster is currently running describe alternatives you ve considered it s possible to use dns to migrate masters without reformat same hostname different machine however this doesn t work for growing shrinking the cluster | 1 |
714,113 | 24,551,160,826 | IssuesEvent | 2022-10-12 12:43:09 | unfoldingWord/translationCore | https://api.github.com/repos/unfoldingWord/translationCore | closed | Handle " & " for discontiguous original language highlights | QA/Pass Priority/High | According to [the new spec](https://forum.door43.org/t/parascriptural-tab-separated-value-format-specification-v2/870) ellipsis is changed from ... to " & " and this breaks the highlighting in the scripture pane. Both of them should work at least for now.
DoD:
---
" & " in the original quote should highlight the discontinuous words
Details:
---
Any use of the " & " or the ellipsis in the original quote field should highlight discontiguous words. There can be multiple & per original quote. | 1.0 | Handle " & " for discontiguous original language highlights - According to [the new spec](https://forum.door43.org/t/parascriptural-tab-separated-value-format-specification-v2/870) ellipsis is changed from ... to " & " and this breaks the highlighting in the scripture pane. Both of them should work at least for now.
DoD:
---
" & " in the original quote should highlight the discontinuous words
Details:
---
Any use of the " & " or the ellipsis in the original quote field should highlight discontiguous words. There can be multiple & per original quote. | priority | handle for discontiguous original language highlights according to ellipsis is changed from to and this breaks the highlighting in the scripture pane both of them should work at least for now dod in the original quote should highlight the discontinuous words details any use of the or the ellipsis in the original quote field should highlight discontiguous words there can be multiple per original quote | 1 |
39,399 | 2,854,388,721 | IssuesEvent | 2015-06-02 00:05:23 | diydrones/dronekit-python | https://api.github.com/repos/diydrones/dronekit-python | closed | Automodule rebuilding ALWAYS returns to an old version | [docs] bug [priority] high | Hi Ramon
Old version of API ref up AGAIN http://python.dronekit.io/automodule.html
Can we fix this once and for all?
Cheers
H | 1.0 | Automodule rebuilding ALWAYs returns to an old version - Hi Ramon
Old version of API ref up AGAIN http://python.dronekit.io/automodule.html
Can we fix this once and for all?
Cheers
H | priority | automodule rebuilding always returns to an old version hi ramon old version of api ref up again can we fix this once and for all cheers h | 1 |
325,204 | 9,921,168,174 | IssuesEvent | 2019-06-30 15:52:34 | r-lib/styler | https://api.github.com/repos/r-lib/styler | closed | styler fails to correctly style syntactic rlang sugar {{ | Complexity: Low Priority: High Status: WIP Type: Enhancement | Hi!
I noticed that the new `{{ }}` operator from `rlang` is not styled properly; for instance, if I style the following lines
```r
library(rlang)
name <- "Jane"
list2({{name}} := 1 + 2)
```
I get the following:
```r
library(rlang)
name <- "Jane"
list2({
{
name
}
} := 1 + 2)
```
Not sure what would be a good way of handling this specific operator (maybe @lionel- has an idea), I just thought about reporting it.
Anyway, thanks for `styler`: absolutely love the package!
Alessandro | 1.0 | styler fails to correctly style syntactic rlang sugar {{ - Hi!
I noticed that the new `{{ }}` operator from `rlang` is not styled properly; for instance, if I style the following lines
```r
library(rlang)
name <- "Jane"
list2({{name}} := 1 + 2)
```
I get the following:
```r
library(rlang)
name <- "Jane"
list2({
{
name
}
} := 1 + 2)
```
Not sure what would be a good way of handling this specific operator (maybe @lionel- has an idea), I just thought about reporting it.
Anyway, thanks for `styler`: absolutely love the package!
Alessandro | priority | styler fails to correctly style syntactic rlang sugar hi i noticed that the new operator from rlang is not styled properly for instance if i style the following lines r library rlang name jane name i get the following r library rlang name jane name not sure what would be a good way of handling this specific operator maybe lionel has an idea i just thought about reporting it anyway thanks for styler absolutely love the package alessandro | 1 |
528,038 | 15,358,848,694 | IssuesEvent | 2021-03-01 15:13:46 | PMEAL/OpenPNM | https://api.github.com/repos/PMEAL/OpenPNM | reopened | Should we automatically apply the Import geometry object when importing data? | high priority maintenance | At present some classes do and some do not. I'm trying to make it all more consistent. | 1.0 | Should we automatically apply the Import geometry object when importing data? - At present some classes do and some do not. I'm trying to make it all more consistent. | priority | should we automatically apply the import geometry object when importing data at present some classes do and some do not i m trying to make it all more consistent | 1 |
453,465 | 13,080,025,213 | IssuesEvent | 2020-08-01 05:47:09 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | No container display after Statful service created under muti-cluster project | area/multicluster kind/bug kind/need-to-verify priority/high | Describe the Bug
No container is displayed after a stateful service is created under a multi-cluster project
Versions Used
KubeSphere:3.0.0
Environment
testing env
http://139.198.12.26:30880/
How To Reproduce
Steps to reproduce the behavior:
1. Go to 'Access control' from 'platform management' on the home page
2. Click on the 'multi-cluster-ws' namespace
3. Click 'project management' and switch to the 'multi-cluster project' tab
4. Click 'muticlustor-project', then click the application menu under application load
5. Create one stateful service correctly
6. No container is displayed for this stateful service
Expected behavior
Should display the correct container
/kind bug
/area multicluster
/assign @zryfish
/milestone 3.0.0 | 1.0 | No container display after Statful service created under muti-cluster project - Describe the Bug
No container display after Statful service created under muti-cluster project
Versions Used
KubeSphere:3.0.0
Environment
testing env
http://139.198.12.26:30880/
How To Reproduce
Steps to reproduce the behavior:
1.Go to 'Access control' from 'platform management' of home page
2.Click on 'multi-cluster-ws' namespace
3.Click 'project management' and switch to 'multi-cluster project' tab
4.Click 'muticlustor-project' and then click application menu under application load
5.Create one statful service correctly
6.No container display for this statful service

Expected behavior
Should display correct container
/kind bug
/area multicluster
/assign @zryfish
/milestone 3.0.0 | priority | no container display after statful service created under muti cluster project describe the bug no container display after statful service created under muti cluster project versions used kubesphere environment testing env how to reproduce steps to reproduce the behavior go to access control from platform management of home page click on multi cluster ws namespace click project management and switch to multi cluster project tab click muticlustor project and then click application menu under application load create one statful service correctly no container display for this statful service expected behavior should display correct container kind bug area multicluster assign zryfish milestone | 1 |
810,612 | 30,250,366,805 | IssuesEvent | 2023-07-06 20:01:29 | PrefectHQ/prefect | https://api.github.com/repos/PrefectHQ/prefect | closed | NotReady tasks and subflows are not visualized in the timeline view. | bug status:stale ui priority:high from:sales Quick win orchestration | ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I refreshed the page and this issue still occurred.
- [X] I checked if this issue was specific to the browser I was using by testing with a different browser.
### Bug summary

You can see the downstream_task_j in the task runs page as being `NotReady` but it is not shown in the timeline view. Showing the NotReady tasks would allow users to still see the dependency relationships that exist between task_j and upstream nodes.
This relationship between nodes that ran and nodes that were considered NotReady was visualized in the radar chart and in the V1 graph.
### Reproduction
[Flow Code in GitHub](https://github.com/taylor-curran/flow-patterns/blob/main/flows/subflows/task_wrapped_deployments.py)

```python
from prefect import flow, task
from prefect.deployments import run_deployment
from prefect.task_runners import ConcurrentTaskRunner
from pydantic import BaseModel


@task()
def upstream_task_h():
    print("upstream task")
    return {"h": "upstream task"}


@task()
def upstream_task_i():
    print("upstream task")
    return {"i": "upstream task"}


@task()
def wrapper_task_a(i, sim_failure_child_flow_a):
    print("wrapper task")
    a = run_deployment(
        "child-flow-a/dep-child-a",
        parameters={"i": i, "sim_failure_child_flow_a": sim_failure_child_flow_a},
    )
    return {"a": a.state.result()}


@task()
def wrapper_task_b(sim_failure_child_flow_b):
    print("wrapper task")
    b = run_deployment(
        name="child-flow-b/dep-child-b",
        parameters={"sim_failure_child_flow_b": sim_failure_child_flow_b},
    )
    # WARNING: We do not evaluate the result or state in this
    # wrapper task, decoupling this wrapper task from its
    # subflow's state.
    return {"b": "not flow result"}


@task()
def wrapper_task_c():
    print("wrapper task")
    c = run_deployment("child-flow-c/dep-child-c")
    return {"c": c.state.result()}


@task()
def downstream_task_j(a):
    print("downstream task")
    return {"j": "downstream task"}


@task()
async def downstream_task_j(a, c, sim_failure_downstream_task_j):
    if sim_failure_downstream_task_j:
        raise Exception("This is a test exception")
    else:
        print("downstream task")
        return {"j": "downstream task"}


@task()
def downstream_task_k(b="b"):
    print(b)
    print("downstream task")
    return {"k": "downstream task"}


# ---

class SimulatedFailure(BaseModel):
    child_flow_a: bool = False
    child_flow_b: bool = False
    downstream_task_j: bool = False


default_simulated_failure = SimulatedFailure(
    child_flow_a=False, child_flow_b=False, downstream_task_j=False
)


# prefect deployment build task_wrapped_deployments.py:task_wrapped_deployments -n dep_task_wrapped -t sub-flows -t task-wrapped -t parent -a
@flow(task_runner=ConcurrentTaskRunner(), persist_result=True)
def task_wrapped_deployments(sim_failure: SimulatedFailure = default_simulated_failure):
    h = upstream_task_h.submit()
    i = upstream_task_i.submit()
    a = wrapper_task_a.submit(i, sim_failure.child_flow_a)
    b = wrapper_task_b.submit(
        sim_failure_child_flow_b=sim_failure.child_flow_b, wait_for=[i]
    )
    c = wrapper_task_c.submit()
    j = downstream_task_j.submit(a, c, sim_failure.downstream_task_j)
    k = downstream_task_k.submit(wait_for=[b])
    return {"j": j, "k": k}


# ---

if __name__ == "__main__":
    task_wrapped_deployments(
        sim_failure=SimulatedFailure(
            child_flow_a=False, child_flow_b=True, downstream_task_j=False
        )
    )
```
### Error

### Browsers
- [X] Chrome
- [ ] Firefox
- [ ] Safari
- [ ] Edge
### Prefect version
```Text
Version: 2.8.6
API version: 0.8.4
Python version: 3.10.10
Git commit: 061d877b
Built: Thu, Mar 16, 2023 2:58 PM
OS/Arch: darwin/arm64
Profile: default
Server type: cloud
```
### Additional context
_No response_ | 1.0 | NotReady tasks and subflows are not visualized in the timeline view. - ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I refreshed the page and this issue still occurred.
- [X] I checked if this issue was specific to the browser I was using by testing with a different browser.
### Bug summary

You can see the downstream_task_j in the task runs page as being `NotReady` but it is not shown in the timeline view. Showing the NotReady tasks would allow users to still see the dependency relationships that exist between task_j and upstream nodes.
This relationship between nodes that ran and nodes that were considered NotReady was visualized in the radar chart and in the V1 graph.
### Reproduction
[Flow Code in Github](https://github.com/taylor-curran/flow-patterns/blob/main/flows/subflows/task_wrapped_deployments.py
)
```
from prefect import flow, task
from prefect.deployments import run_deployment
from prefect.task_runners import ConcurrentTaskRunner
from pydantic import BaseModel
@task()
def upstream_task_h():
print("upstream task")
return {"h": "upstream task"}
@task()
def upstream_task_i():
print("upstream task")
return {"i": "upstream task"}
@task()
def wrapper_task_a(i, sim_failure_child_flow_a):
print("wrapper task")
a = run_deployment(
"child-flow-a/dep-child-a",
parameters={"i": i, "sim_failure_child_flow_a": sim_failure_child_flow_a},
)
return {"a": a.state.result()}
@task()
def wrapper_task_b(sim_failure_child_flow_b):
print("wrapper task")
b = run_deployment(
name="child-flow-b/dep-child-b",
parameters={"sim_failure_child_flow_b": sim_failure_child_flow_b},
)
# WARNING: We do not evaluate the result or state in this
# wrapper task decoupling this wrapper task from its
# subflow's state.
return {"b": "not flow result"}
@task()
def wrapper_task_c():
print("wrapper task")
c = run_deployment("child-flow-c/dep-child-c")
return {"c": c.state.result()}
@task()
def downstream_task_j(a):
print("downstream task")
return {"j": "downstream task"}
@task()
async def downstream_task_j(a, c, sim_failure_downstream_task_j):
if sim_failure_downstream_task_j:
raise Exception("This is a test exception")
else:
print("downstream task")
return {"j": "downstream task"}
@task()
def downstream_task_k(b="b"):
print(b)
print("downstream task")
return {"k": "downstream task"}
# ---
class SimulatedFailure(BaseModel):
child_flow_a: bool = False
child_flow_b: bool = False
downstream_task_j: bool = False
default_simulated_failure = SimulatedFailure(
child_flow_a=False, child_flow_b=False, downstream_task_j=False
)
# prefect deployment build task_wrapped_deployments.py:task_wrapped_deployments -n dep_task_wrapped -t sub-flows -t task-wrapped -t parent -a
@flow(task_runner=ConcurrentTaskRunner(), persist_result=True)
def task_wrapped_deployments(sim_failure: SimulatedFailure = default_simulated_failure):
h = upstream_task_h.submit()
i = upstream_task_i.submit()
a = wrapper_task_a.submit(i, sim_failure.child_flow_a)
b = wrapper_task_b.submit(
sim_failure_child_flow_b=sim_failure.child_flow_b, wait_for=[i]
)
c = wrapper_task_c.submit()
j = downstream_task_j.submit(a, c, sim_failure.downstream_task_j)
k = downstream_task_k.submit(wait_for=[b])
return {"j": j, "k": k}
# ---
if __name__ == "__main__":
task_wrapped_deployments(
sim_failure=SimulatedFailure(
child_flow_a=False, child_flow_b=True, downstream_task_j=False
)
)
```
### Error

### Browers
- [X] Chrome
- [ ] Firefox
- [ ] Safari
- [ ] Edge
### Prefect version
```Text
Version: 2.8.6
API version: 0.8.4
Python version: 3.10.10
Git commit: 061d877b
Built: Thu, Mar 16, 2023 2:58 PM
OS/Arch: darwin/arm64
Profile: default
Server type: cloud
```
### Additional context
_No response_ | priority | notready tasks and subflows are not visualized in the timeline view first check i added a descriptive title to this issue i used the github search to find a similar issue and didn t find it i refreshed the page and this issue still occurred i checked if this issue was specific to the browser i was using by testing with a different browser bug summary you can see the downstream task j in the task runs page as being notready but it is not shown in the timeline view showing the notready tasks would allow users to still see the dependency relationships that exist between task j and upstream nodes this relationship between nodes that ran and nodes that were considered notready was visualized in the radar chart and in the graph reproduction from prefect import flow task from prefect deployments import run deployment from prefect task runners import concurrenttaskrunner from pydantic import basemodel task def upstream task h print upstream task return h upstream task task def upstream task i print upstream task return i upstream task task def wrapper task a i sim failure child flow a print wrapper task a run deployment child flow a dep child a parameters i i sim failure child flow a sim failure child flow a return a a state result task def wrapper task b sim failure child flow b print wrapper task b run deployment name child flow b dep child b parameters sim failure child flow b sim failure child flow b warning we do not evaluate the result or state in this wrapper task decoupling this wrapper task from its subflow s state return b not flow result task def wrapper task c print wrapper task c run deployment child flow c dep child c return c c state result task def downstream task j a print downstream task return j downstream task task async def downstream task j a c sim failure downstream task j if sim failure downstream task j raise exception this is a test exception else print downstream task return j downstream task task def downstream task k 
b b print b print downstream task return k downstream task class simulatedfailure basemodel child flow a bool false child flow b bool false downstream task j bool false default simulated failure simulatedfailure child flow a false child flow b false downstream task j false prefect deployment build task wrapped deployments py task wrapped deployments n dep task wrapped t sub flows t task wrapped t parent a flow task runner concurrenttaskrunner persist result true def task wrapped deployments sim failure simulatedfailure default simulated failure h upstream task h submit i upstream task i submit a wrapper task a submit i sim failure child flow a b wrapper task b submit sim failure child flow b sim failure child flow b wait for c wrapper task c submit j downstream task j submit a c sim failure downstream task j k downstream task k submit wait for return j j k k if name main task wrapped deployments sim failure simulatedfailure child flow a false child flow b true downstream task j false error browers chrome firefox safari edge prefect version text version api version python version git commit built thu mar pm os arch darwin profile default server type cloud additional context no response | 1 |
385,590 | 11,423,085,477 | IssuesEvent | 2020-02-03 15:18:50 | zowe/sample-spring-boot-api-service | https://api.github.com/repos/zowe/sample-spring-boot-api-service | reopened | zowe-api-dev command for i18n and messages | Priority: High Type: Enhancement no-issue-activity | Tooling that simplifies localization and checks its correctness and can generate a message reference for numbered error messages.
- Create a new `zowe-api-dev messages i18n` command that will:
- Create a new localization file - `messages_{languageCode}.properties` initialized with non-localized and commented text from `messages.yml` that needs to be localized
  - Checks that existing localization files `messages_{languageCode}.properties` cover all of the message keys. This will help developers find out that a localization is missing for a new message
  - Checks that existing localizations do not have any extra keys starting with `messages.`, to find leftover localizations for keys that were removed/renamed in `messages.yml`
- New `zowe-api-dev messages docgen`:
- Uses @jandadav's docgen approach that generates a Markdown document that can be used in Zowe documentation with a message reference | 1.0 | zowe-api-dev command for i18n and messages - Tooling that simplifies localization and checks its correctness and can generate a message reference for numbered error messages.
- Create a new `zowe-api-dev messages i18n` command that will:
- Create a new localization file - `messages_{languageCode}.properties` initialized with non-localized and commented text from `messages.yml` that needs to be localized
- Checks that existing localization files `messages_{languageCode}.properties` have all the messages keys covered. This will help developers to find out that a localization is missing for a new message
- Checks that existing localization does not have any extra keys starting with `messages.`. To find out extra localization for keys that were removed/renamed in `messages.yml`
- New `zowe-api-dev messages docgen`:
- Uses @jandadav's docgen approach that generates a Markdown document that can be used in Zowe documentation with a message reference | priority | zowe api dev command for and messages tooling that simplifies localization and checks its correctness and can generate a message reference for numbered error messages create a new zowe api dev messages command that will create a new localization file messages languagecode properties initialized with non localized and commented text from messages yml that needs to be localized checks that existing localization files messages languagecode properties have all the messages keys covered this will help developers to find out that a localization is missing for a new message checks that existing localization does not have any extra keys starting with messages to find out extra localization for keys that were removed renamed in messages yml new zowe api dev messages docgen uses jandadav s docgen approach that generates a markdown document that can be used in zowe documentation with a message reference | 1 |
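The key-coverage checks described in that record (a localization missing a new message key, and stale `messages.`-prefixed keys left over after a rename) boil down to two set differences. A minimal Python sketch of that logic — the file names `messages.yml` and `messages_{languageCode}.properties` come from the issue, but `check_localization` and its simplified inputs are hypothetical, not part of `zowe-api-dev`:

```python
# Sketch of the consistency checks between messages.yml (the base keys)
# and a messages_{languageCode}.properties file (the localized keys).
def check_localization(base_keys, localized_keys):
    # Keys present in messages.yml but absent from the localization:
    # a localization is missing for a new message.
    missing = sorted(base_keys - localized_keys)
    # Extra "messages."-prefixed keys in the localization: leftovers
    # from messages that were removed or renamed in messages.yml.
    extra = sorted(k for k in localized_keys - base_keys
                   if k.startswith("messages."))
    return missing, extra


if __name__ == "__main__":
    base = {"messages.start", "messages.stop"}
    czech = {"messages.start", "messages.renamed"}
    print(check_localization(base, czech))
    # -> (['messages.stop'], ['messages.renamed'])
```

Keys that do not start with `messages.` are deliberately ignored in the "extra" report, matching the issue's wording that only `messages.`-prefixed surplus keys should be flagged.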
303,094 | 9,302,323,289 | IssuesEvent | 2019-03-24 08:26:10 | trimstray/htrace.sh | https://api.github.com/repos/trimstray/htrace.sh | closed | Improving the mechanism for checking the HTRACE_COLORS | Priority: High Status: Completed Type: Enhancement v1.1.4 | - remove unnecessary code
- change `HTRACE_COLORS` parser | 1.0 | Improving the mechanism for checking the HTRACE_COLORS - - remove unnecessary code
- change `HTRACE_COLORS` parser | priority | improving the mechanism for checking the htrace colors remove unnecessary code change htrace colors parser | 1 |
635,518 | 20,404,510,611 | IssuesEvent | 2022-02-23 02:34:26 | gilhrpenner/COMP4350 | https://api.github.com/repos/gilhrpenner/COMP4350 | opened | Create frontend component to view/edit account details | dev task frontend high priority | ## Description
When a user is logged in, a button with their profile image appears in the top right corner; once the user clicks that button, their profile details will be shown.
## Acceptance Criteria
- User must fill all mandatory fields
- Data must be persisted
User story: [Edit my account details](https://github.com/gilhrpenner/COMP4350/issues/108)\
Pre-requisites: N/A
| 1.0 | Create frontend component to view/edit account details - ## Description
When a user is logged-in, at the top right corner will be a button with their profile image, once the user clicks that button their profile details will be shown.
## Acceptance Criteria
- User must fill all mandatory fields
- Data must be persisted
User story: [Edit my account details](https://github.com/gilhrpenner/COMP4350/issues/108)\
Pre-requisites: N/A
| priority | create frontend component to view edit account details description when a user is logged in at the top right corner will be a button with their profile image once the user clicks that button their profile details will be shown acceptance criteria user must fill all mandatory fields data must be persisted user story pre requisites n a | 1 |
96,284 | 3,966,949,980 | IssuesEvent | 2016-05-03 14:43:51 | ozwillo/ozwillo-portal | https://api.github.com/repos/ozwillo/ozwillo-portal | opened | App store improvements - Step #1 | component : store priority : high type : enhancement | - Hide services of an application « behind » the application
- [TODO] Find an (ergonomic) way to access services
- Clean up the store (long descriptions, dead applications, ...)
- Normalize and document applications metadata (texts sizes, ...)
- Display an application's details in its own page rather than in a modal dialog (hard to deal with, not ergonomic and difficult to make accessible)
| 1.0 | App store improvements - Step #1 - - Hide services of an application « behind » the application
- [TODO] Find a (ergonomic) way to access services
- Clean up the store (long descriptions, dead applications, ...)
- Normalize and document applications metadata (texts sizes, ...)
- Display an application details in its own page and anymore in a modal dialog (hard to deal with, not ergonomic and difficult to make accessible)
| priority | app store improvements step hide services of an application « behind » the application find a ergonomic way to access services clean up the store long descriptions dead applications normalize and document applications metadata texts sizes display an application details in its own page and anymore in a modal dialog hard to deal with not ergonomic and difficult to make accessible | 1 |
471,481 | 13,578,191,888 | IssuesEvent | 2020-09-20 06:16:43 | codeRIT/hackathon-manager | https://api.github.com/repos/codeRIT/hackathon-manager | closed | Notify admins when bus captain leaves bus list | 2.0 feature help wanted high priority | If a bus captain leaves the bus list, or declines their RSVP, they'll be removed as a bus captain.
Admins (or some email) should be notified of this happening.
Might be best implemented with:
* New mailer
* `after_save` callback on the Questionnaire model that triggers the email if `is_bus_captain` goes from true -> false | 1.0 | Notify admins when bus captain leaves bus list - If a bus captain leaves the bus list, or RSVP denies, they'll be removed as a bus captain.
Admins (or some email) should be notified of this happening.
Might be best implemented with:
* New mailer
* `after_save` callback on the Questionnaire model that triggers the email if `is_bus_captain` goes from true -> false | priority | notify admins when bus captain leaves bus list if a bus captain leaves the bus list or rsvp denies they ll be removed as a bus captain admins or some email should be notified of this happening might be best implemented with new mailer after save callback on the questionnaire model that triggers the email if is bus captain goes from true false | 1 |
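The removal condition in that record is a simple state transition: `is_bus_captain` going from true to false on save. A sketch of that check — written in Python rather than the app's actual Ruby/Rails code, with `notify_admins` as a hypothetical stand-in for the proposed mailer:

```python
# Illustrative only: models the true -> false transition the proposed
# after_save callback would watch for; not the project's actual code.
def lost_bus_captain(was_captain, is_captain):
    # Fire only when an existing bus captain loses the flag.
    return was_captain and not is_captain


def after_save(user_name, was_captain, is_captain, notify_admins):
    # The (hypothetical) mailer runs only on a true -> false change,
    # so ordinary saves and non-captains never trigger an email.
    if lost_bus_captain(was_captain, is_captain):
        notify_admins(user_name)


if __name__ == "__main__":
    sent = []
    after_save("alice", True, False, sent.append)   # captain removed -> notify
    after_save("bob", False, False, sent.append)    # never a captain -> no email
    print(sent)  # -> ['alice']
```

The same predicate covers both removal paths named in the issue (leaving the bus list and declining the RSVP), since both end with the flag flipping to false.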
602,377 | 18,467,905,869 | IssuesEvent | 2021-10-17 07:57:46 | AY2122S1-CS2113T-F11-4/tp | https://api.github.com/repos/AY2122S1-CS2113T-F11-4/tp | opened | Redesign the EditCommand() | priority.High type.Feature | Implement the logic of determining which property to update inside EditCommand instead of parser | 1.0 | Redesign the EditCommand() - Implement the logic of determining which property to update inside EditCommand instead of parser | priority | redesign the editcommand implement the logic of determining which property to update inside editcommand instead of parser | 1 |
444,370 | 12,810,513,951 | IssuesEvent | 2020-07-03 18:56:13 | TASVideos/BizHawk | https://api.github.com/repos/TASVideos/BizHawk | closed | TAStudio Leaks Memory, even after closing | Priority: High Repro: Affects 2.4.2 Repro: Affects 2.5 dev Tool: TAStudio | Repro: Load a game. Open TAStudio. Close TAStudio. Unpause and Let the game run normally.
Watch in Task Manager as memory usage increases by tens of MB per second.
| 1.0 | TAStudio Leaks Memory, even after closing - Repro: Load a game. Open TAStudio. Close TAStudio. Unpause and Let the game run normally.
Watch in task manager as memory usage increases be tens of MB per second.
| priority | tastudio leaks memory even after closing repro load a game open tastudio close tastudio unpause and let the game run normally watch in task manager as memory usage increases be tens of mb per second | 1 |
26,968 | 2,689,379,614 | IssuesEvent | 2015-03-31 09:48:43 | jackjonesfashion/tasks | https://api.github.com/repos/jackjonesfashion/tasks | closed | Brandsites - Testing | In progress Priority: High Task | Testing of individual brandsites
All brands
- [x] Add conditionals to scroll tracking
Core:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General
Originals:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video (NA)
- [x] Scripting: General
Premium:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General
Vintage:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General
Tech:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General | 1.0 | Brandsites - Testing - Testing of individual brandsites
All brands
- [x] Add conditionals to scroll tracking
Core:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General
Originals:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video (NA)
- [x] Scripting: General
Premium:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General
Vintage:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General
Tech:
- [x] Styling: Visual test
- [x] Assets: Load and performance (Staging / production)
- [x] Scripting: Image switching
- [x] Scripting: Tracking
- [x] Scripting: Video
- [x] Scripting: General | priority | brandsites testing testing of individual brandsites all brands add conditionals to scroll tracking core styling visual test assets load and performance staging production scripting image switching scripting tracking scripting video scripting general originals styling visual test assets load and performance staging production scripting image switching scripting tracking scripting video na scripting general premium styling visual test assets load and performance staging production scripting image switching scripting tracking scripting video scripting general vintage styling visual test assets load and performance staging production scripting image switching scripting tracking scripting video scripting general tech styling visual test assets load and performance staging production scripting image switching scripting tracking scripting video scripting general | 1 |
754,157 | 26,374,147,831 | IssuesEvent | 2023-01-11 23:56:12 | geopm/geopm | https://api.github.com/repos/geopm/geopm | closed | geopmd exceptions lost on shutdown | bug bug-priority-high bug-exposure-high bug-quality-low | **Describe the bug**
Exceptions that are generated within geopmd are lost on shutdown.
**GEOPM version**
4fdf753b3
**Expected behavior**
A clear and concise description of what actually happened is emitted to the console (or syslog).
**Actual behavior**
If an exception occurs while geopmd is running, no message is emitted to the console (or syslog). When running manually, control simply returns to the terminal. | 1.0 | geopmd exceptions lost on shutdown - **Describe the bug**
Exceptions that are generated within geopmd are lost on shutdown.
**GEOPM version**
4fdf753b3
**Expected behavior**
A clear and concise description of what actually happened is emitted to the console (or syslog).
**Actual behavior**
If an exception occurs while geopmd is running, no message is emitted to the console (or syslog). When running manually, control simply returns to the terminal. | priority | geopmd exceptions lost on shutdown describe the bug exceptions that are generated within geopmd are lost on shutdown geopm version expected behavior a clear and concise description of what actually happened is emitted to the console or syslog actual behavior if an exception occurs while geopmd is running no message is emitted to the console or syslog when running manually control simply returns to the terminal | 1 |
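As an illustration of the expected behavior — an uncaught exception being reported to the console (or syslog) instead of vanishing — here is a generic Python top-level hook of the kind a daemon can install before shutdown. This is a hypothetical sketch, not geopmd's actual code:

```python
# Illustrative daemon-style exception reporting: route anything that
# would otherwise terminate the process silently to stderr (and,
# optionally, syslog). Not taken from geopmd.
import logging
import sys

log = logging.getLogger("daemon")
log.addHandler(logging.StreamHandler(sys.stderr))  # console
# Where a syslog socket is available, a
# logging.handlers.SysLogHandler(address="/dev/log") could be added too.
log.setLevel(logging.ERROR)


def format_uncaught(exc_type, exc):
    # One-line summary of what actually happened.
    return "uncaught exception: %s: %s" % (exc_type.__name__, exc)


def log_uncaught(exc_type, exc, tb):
    # Called for any exception not handled elsewhere; the message
    # reaches the attached handlers before shutdown completes.
    log.error(format_uncaught(exc_type, exc))


sys.excepthook = log_uncaught
```

With a hook like this installed, running the daemon manually would print the exception summary to the terminal rather than simply returning control with no output.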
270,299 | 8,454,190,016 | IssuesEvent | 2018-10-20 23:31:03 | google/synthmark | https://api.github.com/repos/google/synthmark | closed | add test to measure governor response time | enhancement high priority | Add a new test that gives a clear measure of the time to recover when the work load increases. LatencyMark restarts the test with a bigger buffer when there is a glitch. Add a test that just measures how long it takes to render in real-time again after a workload increase. | 1.0 | add test to measure governor response time - Add a new test that gives a clear measure of the time to recover when the work load increases. LatencyMark restarts the test with a bigger buffer when there is a glitch. Add a test that just measures how long it takes to render in real-time again after a workload increase. | priority | add test to measure governor response time add a new test that gives a clear measure of the time to recover when the work load increases latencymark restarts the test with a bigger buffer when there is a glitch add a test that just measures how long it takes to render in real time again after a workload increase | 1 |
684,522 | 23,421,296,809 | IssuesEvent | 2022-08-13 18:40:04 | tfussell/xlnt | https://api.github.com/repos/tfussell/xlnt | closed | Unknown attribute 'ref' in `parser::pop_element()` for certain files | bug high priority | _This might be the same issue as reported in issue #433, but wanted to create a separate one just in case._
From my observations this seems to be triggered when cells were auto-populated by excel after dragging the corner of a cell (or multiple cells), i.e. doing this:

I've created two files that seem to be identical, but one was created by writing down the formulas by hand, while the other one was created by dragging the first row down:
* [working.xlsx](https://github.com/tfussell/xlnt/files/4043172/working.xlsx) (manually written)
* [broken.xlsx](https://github.com/tfussell/xlnt/files/4043174/broken.xlsx) (dragged cell down) | 1.0 | Unknown attribute 'ref' in `parser::pop_element()` for certain files - _This might be the same issue as reported in issue #433, but wanted to create a separate one just in case._
From my observations this seems to be triggered when cells were auto-populated by excel after dragging the corner of a cell (or multiple cells), i.e. doing this:

I've create two files that seem to be identical, but one was created by writing down the formulas by hand, while the other one was created dragging the first row down:
* [working.xlsx](https://github.com/tfussell/xlnt/files/4043172/working.xlsx) (manually written)
* [broken.xlsx](https://github.com/tfussell/xlnt/files/4043174/broken.xlsx) (dragged cell down) | priority | unknown attribute ref in parser pop element for certain files this might be the same issue as reported in issue but wanted to create a separate one just in case from my observations this seems to be triggered when cells were auto populated by excel after dragging the corner of a cell or multiple cells i e doing this i ve create two files that seem to be identical but one was created by writing down the formulas by hand while the other one was created dragging the first row down manually written dragged cell down | 1 |
485,651 | 13,996,659,077 | IssuesEvent | 2020-10-28 06:25:00 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.gofundme.com - see bug description | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal | <!-- @browser: Firefox Mobile 17:03 201023 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60578 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.gofundme.com/
**Browser / Version**: Firefox Mobile 17:03 201023
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: Seemed to cause my browser to crash.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201022093646</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/188901e9-acb2-497c-adcf-18cb60e97c68)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.gofundme.com - see bug description - <!-- @browser: Firefox Mobile 17:03 201023 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60578 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.gofundme.com/
**Browser / Version**: Firefox Mobile 17:03 201023
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: Seemed to cause my browser to crash.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201022093646</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/188901e9-acb2-497c-adcf-18cb60e97c68)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description seemed to cause my browser to crash steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
179,002 | 6,620,726,543 | IssuesEvent | 2017-09-21 16:29:02 | ow2-proactive/scheduling | https://api.github.com/repos/ow2-proactive/scheduling | closed | On Windows 7, RunAsMe tasks fail | priority:high severity:major type:bug | with the following exception:
```
org.ow2.proactive.scheduler.task.exceptions.ForkedJvmProcessException: Failed to execute task in a forked JVM
at org.ow2.proactive.scheduler.task.executors.ForkedTaskExecutor.createTaskResult(ForkedTaskExecutor.java:139)
at org.ow2.proactive.scheduler.task.executors.ForkedTaskExecutor.execute(ForkedTaskExecutor.java:108)
at org.ow2.proactive.scheduler.task.TaskLauncher.doTask(TaskLauncher.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.objectweb.proactive.core.mop.MethodCall.execute(MethodCall.java:353)
at org.objectweb.proactive.core.body.request.RequestImpl.serveInternal(RequestImpl.java:214)
at org.objectweb.proactive.core.body.request.RequestImpl.serve(RequestImpl.java:160)
at org.objectweb.proactive.core.body.BodyImpl$ActiveLocalBodyStrategy.serveInternal(BodyImpl.java:552)
at org.objectweb.proactive.core.body.BodyImpl$ActiveLocalBodyStrategy.serve(BodyImpl.java:485)
at org.objectweb.proactive.core.body.AbstractBody.serve(AbstractBody.java:426)
at org.objectweb.proactive.Service.blockingServeOldest(Service.java:206)
at org.objectweb.proactive.Service.blockingServeOldest(Service.java:181)
at org.objectweb.proactive.Service.fifoServing(Service.java:146)
at org.objectweb.proactive.core.body.ActiveBody$FIFORunActive.runActivity(ActiveBody.java:337)
at org.objectweb.proactive.core.body.ActiveBody.run(ActiveBody.java:175)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Forked task failed to remove serialized task context, probably a permission issue on folder H:\tmp\PA_JVM2138173081\precision_15412\204\484680568
... 18 more
```
The forker JVM does not have the rights to remove files created by the subprocess running with a different user (even though full rights have been set to the folder H:\tmp for all users)
| 1.0 | On Windows 7, RunAsMe tasks fail - with the following exception:
```
org.ow2.proactive.scheduler.task.exceptions.ForkedJvmProcessException: Failed to execute task in a forked JVM
at org.ow2.proactive.scheduler.task.executors.ForkedTaskExecutor.createTaskResult(ForkedTaskExecutor.java:139)
at org.ow2.proactive.scheduler.task.executors.ForkedTaskExecutor.execute(ForkedTaskExecutor.java:108)
at org.ow2.proactive.scheduler.task.TaskLauncher.doTask(TaskLauncher.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.objectweb.proactive.core.mop.MethodCall.execute(MethodCall.java:353)
at org.objectweb.proactive.core.body.request.RequestImpl.serveInternal(RequestImpl.java:214)
at org.objectweb.proactive.core.body.request.RequestImpl.serve(RequestImpl.java:160)
at org.objectweb.proactive.core.body.BodyImpl$ActiveLocalBodyStrategy.serveInternal(BodyImpl.java:552)
at org.objectweb.proactive.core.body.BodyImpl$ActiveLocalBodyStrategy.serve(BodyImpl.java:485)
at org.objectweb.proactive.core.body.AbstractBody.serve(AbstractBody.java:426)
at org.objectweb.proactive.Service.blockingServeOldest(Service.java:206)
at org.objectweb.proactive.Service.blockingServeOldest(Service.java:181)
at org.objectweb.proactive.Service.fifoServing(Service.java:146)
at org.objectweb.proactive.core.body.ActiveBody$FIFORunActive.runActivity(ActiveBody.java:337)
at org.objectweb.proactive.core.body.ActiveBody.run(ActiveBody.java:175)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Forked task failed to remove serialized task context, probably a permission issue on folder H:\tmp\PA_JVM2138173081\precision_15412\204\484680568
... 18 more
```
The forker JVM does not have the rights to remove files created by the subprocess running with a different user (even though full rights have been set to the folder H:\tmp for all users)
| priority | on windows runasme tasks fail with the following exception org proactive scheduler task exceptions forkedjvmprocessexception failed to execute task in a forked jvm at org proactive scheduler task executors forkedtaskexecutor createtaskresult forkedtaskexecutor java at org proactive scheduler task executors forkedtaskexecutor execute forkedtaskexecutor java at org proactive scheduler task tasklauncher dotask tasklauncher java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org objectweb proactive core mop methodcall execute methodcall java at org objectweb proactive core body request requestimpl serveinternal requestimpl java at org objectweb proactive core body request requestimpl serve requestimpl java at org objectweb proactive core body bodyimpl activelocalbodystrategy serveinternal bodyimpl java at org objectweb proactive core body bodyimpl activelocalbodystrategy serve bodyimpl java at org objectweb proactive core body abstractbody serve abstractbody java at org objectweb proactive service blockingserveoldest service java at org objectweb proactive service blockingserveoldest service java at org objectweb proactive service fifoserving service java at org objectweb proactive core body activebody fiforunactive runactivity activebody java at org objectweb proactive core body activebody run activebody java at java lang thread run thread java caused by java io ioexception forked task failed to remove serialized task context probably a permission issue on folder h tmp pa precision more the forker jvm does not have the rights to remove files created by the subprocess running with a different user even though full rights have been set to the folder h tmp for all users | 1 |
553,739 | 16,381,447,870 | IssuesEvent | 2021-05-17 03:49:07 | uioz/mfe-proxy-server | https://github.com/uioz/mfe-proxy-server | closed | When handling an MPA project, if mfe-route provides no domain, point unmatched requests to the index property | enhancement hight priority | # Description
To simplify the complexity of the entry-file routing rules, for MPA projects, if no `domain` is specified, all responses return the page corresponding to the `index` property. | 1.0 | When handling an MPA project, if mfe-route provides no domain, point unmatched requests to the index property - # Description
To simplify the complexity of the entry-file routing rules, for MPA projects, if no `domain` is specified, all responses return the page corresponding to the `index` property. | priority | when handling an mpa project if mfe route provides no domain point unmatched requests to the index property description to simplify the complexity of the entry file routing rules for mpa projects if no domain is specified all responses return the page corresponding to the index property | 1
3,479 | 2,538,472,012 | IssuesEvent | 2015-01-27 07:11:44 | newca12/gapt | https://api.github.com/repos/newca12/gapt | closed | AndLeftRule test fails | 1 star bug imported Priority-High | _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on December 04, 2010 00:56:36_
testing LK: the LKTest fails to compile due to missing AndLeftRule. Adding the macro package makes the test compile, but then it fails on the AndLeftRule test:
The factories/extractors for LK should work for AndLeftRule Time elapsed: 0.004 sec \<<< ERROR!
org.specs.runner.UserError: at.logic.calculi.lk.base.LKRuleCreationException: Formulas to be contracted are of the same occurrence
at at.logic.calculi.lk.propositionalRules.ContractionLeftRule$.apply(propositionalRules.scala:137)
at at.logic.calculi.lk.macroRules.AndLeftRule$.apply(macroRules.scala:23)
_Original issue: http://code.google.com/p/gapt/issues/detail?id=93_ | 1.0 | AndLeftRule test fails - _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on December 04, 2010 00:56:36_
testing LK: the LKTest fails to compile due to missing AndLeftRule. Adding the macro package makes the test compile, but then it fails on the AndLeftRule test:
The factories/extractors for LK should work for AndLeftRule Time elapsed: 0.004 sec \<<< ERROR!
org.specs.runner.UserError: at.logic.calculi.lk.base.LKRuleCreationException: Formulas to be contracted are of the same occurrence
at at.logic.calculi.lk.propositionalRules.ContractionLeftRule$.apply(propositionalRules.scala:137)
at at.logic.calculi.lk.macroRules.AndLeftRule$.apply(macroRules.scala:23)
_Original issue: http://code.google.com/p/gapt/issues/detail?id=93_ | priority | andleftrule test fails from on december testing lk the lktest fails to compile due to missing andleftrule adding the macro package makes the test compiles but then it fails on the andleftrule test the factories extractors for lk should work for andleftrule time elapsed sec error org specs runner usererror at logic calculi lk base lkrulecreationexception formulas to be contracted are of the same occurrence at at logic calculi lk propositionalrules contractionleftrule apply propositionalrules scala at at logic calculi lk macrorules andleftrule apply macrorules scala original issue | 1 |
562,349 | 16,657,748,793 | IssuesEvent | 2021-06-05 20:56:09 | LPD-VRChat/LPD-Officer-Monitor | https://api.github.com/repos/LPD-VRChat/LPD-Officer-Monitor | closed | Adjust inactivity system before next inactive sweep | High Priority | The parameters for someone to be marked as inactive need to be changed slightly.
1 hour within the last 60 days.
Or a message in a monitored channel in the last 14 days.
| 1.0 | Adjust inactivity system before next inactive sweep - The parameters for someone to be marked as inactive need to be changed slightly.
1 hour within the last 60 days.
Or a message in a monitored channel in the last 14 days.
| priority | adjust inactivity system before next inactive sweep the parameters for someone to be marked as inactive need to be changed slightly hour within the last days or a message in a monitored channel in the last days | 1 |
679,819 | 23,246,486,880 | IssuesEvent | 2022-08-03 20:44:10 | intel/cve-bin-tool | https://api.github.com/repos/intel/cve-bin-tool | closed | Feature Request: improved CVE overview (HTML reports) | enhancement higher priority | Feature request received by email:
> It would be great to mark all flags on the main page (count of CVEs of each status: New, Confirmed, Mitigated, Unexplored, Ignored). Maybe some colors and lines on the Product CVEs diagram, also?
| 1.0 | Feature Request: improved CVE overview (HTML reports) - Feature request received by email:
> It would be great to mark all flags on the main page (count of CVEs of each status: New, Confirmed, Mitigated, Unexplored, Ignored). Maybe some colors and lines on the Product CVEs diagram, also?
| priority | feature request improved cve overview html reports feature request received by email there will be great to mark all flags on the main page count of cves of each status new confirmed mitigated unexplored ignored maybe some colors and lines on product cves diagram also | 1 |
95,158 | 3,934,825,994 | IssuesEvent | 2016-04-26 00:48:36 | music-encoding/music-encoding | https://api.github.com/repos/music-encoding/music-encoding | closed | Usage of saxon:evaluate() in musicxml2mei-3.0.xsl | Component: Tools Priority: High Type: Bug | _From [roewenst...@gmail.com](https://code.google.com/u/107386258860154419675/) on July 31, 2014 10:12:27_
The usage of the Saxon extension function evaluate() forces users to have a Saxon EE license and it doesn't seem to be necessary.
The line 1790 could be rewritten from
\<xsl:value-of select="saxon:evaluate(ancestor::part[attributes/time/beats]/attributes/time/beats)"/>
to
\<xsl:value-of select="ancestor::part[attributes/time/beats]/attributes/time/beats"/>
The lines 1794, 3857, 3861 can be rewritten in the same way.
Cheers, Daniel
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=196_ | 1.0 | Usage of saxon:evaluate() in musicxml2mei-3.0.xsl - _From [roewenst...@gmail.com](https://code.google.com/u/107386258860154419675/) on July 31, 2014 10:12:27_
The usage of the Saxon extension function evaluate() forces users to have a Saxon EE license and it doesn't seem to be necessary.
The line 1790 could be rewritten from
\<xsl:value-of select="saxon:evaluate(ancestor::part[attributes/time/beats]/attributes/time/beats)"/>
to
\<xsl:value-of select="ancestor::part[attributes/time/beats]/attributes/time/beats"/>
The lines 1794, 3857, 3861 can be rewritten in the same way.
Cheers, Daniel
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=196_ | priority | usage of saxon evaluate in xsl from on july the usage of the saxon extension function evaluate forces users to have a saxon ee license and it doesn t seem to be necessary the line could be rewritten from to the lines can be rewritten in the same way cheers daniel original issue | 1 |
626,128 | 19,785,673,087 | IssuesEvent | 2022-01-18 06:17:46 | TeamBookTez/booktez-server | https://github.com/TeamBookTez/booktez-server | closed | [bug] Apply the guest (non-member) flow to adding a book to the bookshelf | 🐞 bug 1️⃣ priority: high | ## 🐞 Bug description
- Fix the postBook API logic
## 📝 todo
- [ ] Add reviewId to the postBook API response
- [ ] Add the review-creation step to the postBook API logic
| 1.0 | [bug] Apply the guest (non-member) flow to adding a book to the bookshelf - ## 🐞 Bug description
- Fix the postBook API logic
## 📝 todo
- [ ] Add reviewId to the postBook API response
- [ ] Add the review-creation step to the postBook API logic
| priority | apply the guest non member flow to adding a book to the bookshelf 🐞 bug description fix the postbook api logic 📝 todo add reviewid to the postbook api response add the review creation step to the postbook api logic | 1
633,783 | 20,265,621,996 | IssuesEvent | 2022-02-15 11:45:40 | scandipwa/scandipwa | https://api.github.com/repos/scandipwa/scandipwa | closed | Price Slider | Type: feature Core Status: Waiting Estimate High Priority Area: Price Area: PLP | **Description**:
Provide Price Slider feature on PLP.
**Context**:
We would like to provide the Price Slider feature on PLP as an extension to be published in the marketplace. It should not be a part of the default theme.
**Expected results**:
The user can define the max and min prices in the layered navigation. Only the products within this range will show up.
**Screenshots**:



| 1.0 | Price Slider - **Description**:
Provide Price Slider feature on PLP.
**Context**:
We would like to provide the Price Slider feature on PLP as an extension to be published in the marketplace. It should not be a part of the default theme.
**Expected results**:
The user can define the max and min prices in the layered navigation. Only the products within this range will show up.
**Screenshots**:



| priority | price slider description provide price slider feature on plp context we would like to provide the price slider feature on plp as an extension to be published in the marketplace it should not be a part of the default theme expected results the user can define the max and min prices in the layered navigation only the products within this range will show up screenshots | 1 |
133,316 | 5,200,797,585 | IssuesEvent | 2017-01-24 01:21:37 | RhoInc/safety-outlier-explorer | https://api.github.com/repos/RhoInc/safety-outlier-explorer | closed | Add Participant Information to details view | high priority | Jack's request for a Participant Profile can be mostly covered by adding participant information (sex, age, ID) to the details list on the drill down for each test.
| 1.0 | Add Participant Information to details view - Jack's request for a Participant Profile can be mostly covered by adding participant information (sex, age, ID) to the details list on the drill down for each test.
| priority | add participant information to details view jack s request for a participant profile can be mostly covered by adding participant information sex age id to the details list on the drill down for each test | 1 |
539,354 | 15,787,285,776 | IssuesEvent | 2021-04-01 18:59:38 | hackforla/expunge-assist | https://api.github.com/repos/hackforla/expunge-assist | closed | create Button Container component | development high priority | Maybe this can be called a `FormFooter` component. This is where the "Back" and "?" and "Next" button, etc is located. | 1.0 | create Button Container component - Maybe this can be called a `FormFooter` component. This is where the "Back" and "?" and "Next" button, etc is located. | priority | create button container component maybe this can be called a formfooter component this is where the back and and next button etc is located | 1 |
323,117 | 9,843,022,085 | IssuesEvent | 2019-06-18 10:36:43 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | closed | Add Default Zone Endpoint for users | Priority: High Sponsored Status: Accepted Type: Feature | **Description**
Store on the user template the last zone it used, allowing it to use the same zone next time it logs in.
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [x] Branch created
- [x] Code committed to development branch
- [x] Testing - QA
- [x] Documentation
- [x] Release notes - resolved issues, compatibility, known issues
- [x] Code committed to upstream release/hotfix branches
- [x] Documentation committed to upstream release/hotfix branches
| 1.0 | Add Default Zone Endpoint for users - **Description**
Store on the user template the last zone it used, allowing it to use the same zone next time it logs in.
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [x] Branch created
- [x] Code committed to development branch
- [x] Testing - QA
- [x] Documentation
- [x] Release notes - resolved issues, compatibility, known issues
- [x] Code committed to upstream release/hotfix branches
- [x] Documentation committed to upstream release/hotfix branches
| priority | add default zone endpoint for users description store on the user template the last zone it used allowing it to use the same zone next time it logs in progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches | 1 |
40,001 | 2,862,035,145 | IssuesEvent | 2015-06-04 00:27:43 | dart-lang/polymer-dart | https://api.github.com/repos/dart-lang/polymer-dart | opened | polymer tests timeout in windows? | bug PolymerMilestone-Later Priority-High | <a href="https://github.com/sigmundch"><img src="https://avatars.githubusercontent.com/u/2049220?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [sigmundch](https://github.com/sigmundch)**
_Originally opened as https://github.com/dart-lang/sdk/issues/13260_
----
Seems like some polymer tests are failing with timeouts in FF and Chrome.
Need to investigate more.
| 1.0 | polymer tests timeout in windows? - <a href="https://github.com/sigmundch"><img src="https://avatars.githubusercontent.com/u/2049220?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [sigmundch](https://github.com/sigmundch)**
_Originally opened as https://github.com/dart-lang/sdk/issues/13260_
----
Seems like some polymer tests are failing with timeouts in FF and Chrome.
Need to investigate more.
| priority | polymer tests timeout in windows issue by originally opened as seems like some polymer tests are failing with timeouts in ff and chrome need to investigate more | 1 |
602,330 | 18,466,939,463 | IssuesEvent | 2021-10-17 03:15:55 | AY2122S1-CS2113T-W12-1/tp | https://api.github.com/repos/AY2122S1-CS2113T-W12-1/tp | opened | Categorize each entry as breakfast, lunch, dinner, or snack. | priority.High type.Enhancement | As a user, I can categorize each entry as breakfast, lunch, dinner, or snack. | 1.0 | Categorize each entry as breakfast, lunch, dinner, or snack. - As a user, I can categorize each entry as breakfast, lunch, dinner, or snack. | priority | categorize each entry as breakfast lunch dinner or snack as a user i can categorize each entry as breakfast lunch dinner or snack | 1 |
222,873 | 7,440,353,041 | IssuesEvent | 2018-03-27 09:48:58 | inspireteam/link-proxy | https://api.github.com/repos/inspireteam/link-proxy | opened | Add global subscribers | enhancement priority/high | Webhook that sends all updates to all links.
Keep in mind that in the future we may add some filters. | 1.0 | Add global subscribers - Webhook that sends all updates to all links.
Keep in mind that in the future we may add some filters. | priority | add global subscribers webhook that sends all updates to all links keep in mind that in the future we may add some filters | 1 |
236,434 | 7,749,213,972 | IssuesEvent | 2018-05-30 10:42:18 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0003550:
If at least one calendar event has a repeat rule set, the sync fails | ActiveSync bug high priority | **Reported by strikegun on 18 Dec 2010 03:45**
**Version:** git master
I tried it on a new database and new user. After I set a event as repeating, the sync fails.
Please make it work.
Thank you
**Steps to reproduce:** create a calendar event with a repeating rule, and sync it with an Android 2.2 phone.
| 1.0 | 0003550:
If at least one calendar event has a repeat rule set, the sync fails - **Reported by strikegun on 18 Dec 2010 03:45**
**Version:** git master
I tried it on a new database and new user. After I set a event as repeating, the sync fails.
Please make it work.
Thank you
**Steps to reproduce:** create a calendar event with a repeating rule, and sync it with an Android 2.2 phone.
| priority | if at least one calendar event has a repeat rule set the sync fails reported by strikegun on dec version git master i tried it on a new database and new user after i set a event as repeating the sync fails please make it work thank you steps to reproduce create a calendar event with a repeating rule and sync it with a android phone | 1 |
432,769 | 12,498,024,582 | IssuesEvent | 2020-06-01 17:31:56 | trussworks/react-uswds | https://api.github.com/repos/trussworks/react-uswds | closed | [bug] node version upgrade made this incompatible with MilMove | high priority type: bug | **Describe the bug**
Requiring node v12 meant MilMove could no longer use newer versions of ReactUSWDS since MilMove is still using node v10. Since we are not using any features that are exclusive to 12, I think we should expand the `engines` range to include 10 and 12.
https://docs.npmjs.com/files/package.json#engines
| 1.0 | [bug] node version upgrade made this incompatible with MilMove - **Describe the bug**
Requiring node v12 meant MilMove could no longer use newer versions of ReactUSWDS since MilMove is still using node v10. Since we are not using any features that are exclusive to 12, I think we should expand the `engines` range to include 10 and 12.
https://docs.npmjs.com/files/package.json#engines
| priority | node version upgrade made this incompatible with milmove describe the bug requiring node meant milmove could no longer use newer versions of reactuswds since milmove is still using node since we are not using any features that are exclusive to i think we should expand the engines range to include and | 1 |
424,293 | 12,309,115,375 | IssuesEvent | 2020-05-12 08:26:01 | UniversityOfHelsinkiCS/lomake | https://api.github.com/repos/UniversityOfHelsinkiCS/lomake | closed | Lock each text field for editing for one user at a time only | enhancement high priority | test with 10s time out after each ticket | 1.0 | Lock each text field for editing for one user at a time only - test with 10s time out after each ticket | priority | lock each text field for editing for one user at a time only test with time out after each ticket | 1 |
236,292 | 7,748,363,380 | IssuesEvent | 2018-05-30 08:05:37 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0002408:
Plain Text in one Line | Felamimail bug high priority | **Reported by Peterchen on 18 Mar 2010 14:04**
**Version:** git master
It would just need nl2br added when you have the Plaintext View in the Mails!
| 1.0 | 0002408:
Plain Text in one Line - **Reported by Peterchen on 18 Mar 2010 14:04**
**Version:** git master
It would just need nl2br added when you have the Plaintext View in the Mails!
| priority | plain text in one line reported by peterchen on mar version git master it will be just add when you have the plaintext view in the mails | 1 |
772,199 | 27,110,778,638 | IssuesEvent | 2023-02-15 15:10:45 | codersforcauses/wadl | https://github.com/codersforcauses/wadl | closed | Create manage divisions page | backend frontend priority::high stage 2 difficulty:extreme | ## Prerequisite
#172
## Basic Information
- [x] Dynamic routing depending on the level
- [x] Ability to add new divisions
- [x] Ability to choose a venue for that division
- [x] Ability to add teams to a division
- [x] modal should pop up
- [x] Teams should have colour chips depending on the priority that the team has for that venue
- [x] Add divisions/ teams to firebase tournament structure
- [x] Load divisions / teams froms firebase if already exists
- [x] Only update divisions in firebase if they have changes
- [x] Allocate byes to all odd number of teams in a division
This issue requires a few people including someone who knows backend | 1.0 | Create manage divisions page - ## Prerequiste
#172
## Basic Information
- [x] Dynamic routing depending on the level
- [x] Ability to add new divisions
- [x] Ability to choose a venue for that division
- [x] Ability to add teams to a division
- [x] modal should pop up
- [x] Teams should have colour chips depending on the priority that the team has for that venue
- [x] Add divisions/ teams to firebase tournament structure
- [x] Load divisions / teams froms firebase if already exists
- [x] Only update divisions in firebase if they have changes
- [x] Allocate byes to all odd number of teams in a division
This issue requires a few people including someone who knows backend | priority | create manage divisions page prerequiste basic information dynamic routing depending on the level ability to add new divisions ability to choose a venue for that division ability to add teams to a division modal should pop up teams should have a colour chips depending on the priority that the team has for that venue add divisions teams to firebase tournament structure load divisions teams froms firebase if already exists only update divisions in firebase if they have changes allocate byes to all odd number of teams in a division this issue requires a few people including someone who knows backend | 1 |
719,391 | 24,758,404,305 | IssuesEvent | 2022-10-21 20:17:50 | decline-cookies/anvil-csharp-core | https://api.github.com/repos/decline-cookies/anvil-csharp-core | opened | Implement or Source an `ArraySlice` | effort-high priority-medium status-backlog type-feature | Inspired by Unity's `NativeArraySlice` it would be very helpful to have a `struct` that represents a piece of a standard Array.
Read + Write functionality would be ideal but the ability to read is the most important.
This SO post provides some sample/hint implementations. `IList` is probably a good starting point.
An example application: `TypeExtension.GetReadableNameOfGenericRecursive` would have been much nicer if we could pass a new slice into each recursion rather than tracking the upper bound.
Read + Write functionality would be ideal but the ability to read is the most important.
This SO post provides some sample/hint implementations. `IList` is probably a good starting point.
An example application: `TypeExtension.GetReadableNameOfGenericRecursive` would have been much nicer if we could pass a new slice into each recursion rather than tracking the upper bound.
129,047 | 5,087,780,424 | IssuesEvent | 2016-12-31 09:16:26 | fossasia/gci16.fossasia.org | https://api.github.com/repos/fossasia/gci16.fossasia.org | closed | Thankyou for contribution not displaying names correctly | bug enhancement help wanted Priority: HIGH ui/ux | 
I don't know what that thing is! First the name should be appearing on the top and not below and second the thing should be placed in the Contributors box and not outside it!
If anyone wants to work on this please tell :) | 1.0 | Thankyou for contribution not displaying names correctly - 
I don't know what that thing is! First the name should be appearing on the top and not below and second the thing should be placed in the Contributors box and not outside it!
If anyone wants to work on this please tell :) | priority | thankyou for contribution not displaying names correctly i don t know what that thing is first the name should be appearing on the top and not below and second the thing should be placed in the contributors box and not outside it if anyone wants to work on this please tell | 1 |
409,184 | 11,958,119,591 | IssuesEvent | 2020-04-04 16:53:42 | WEEE-Open/weeelab-telegram-bot | https://api.github.com/repos/WEEE-Open/weeelab-telegram-bot | closed | Store last Telegram chat ID | enhancement high priority | In the LDAP server, for known users. This requires a new schema entry that will be added as soon as needed.
This is to send messages that aren't just responses to users, e.g. to send reminders of /tolab dates 1 hour before it's time. | 1.0 | Store last Telegram chat ID - In the LDAP server, for known users. This requires a new schema entry that will be added as soon as needed.
This is to send messages that aren't just responses to users, e.g. to send reminders of /tolab dates 1 hour before it's time. | priority | store last telegram chat id in the ldap server for known users this requires a new schema entry that will be added as soon as needed this is to send messages that aren t just responses to users e g to send reminders of tolab dates hour before it s time | 1 |
349,124 | 10,459,108,095 | IssuesEvent | 2019-09-20 10:06:06 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Template ResourceAccessControl "default-access" | Affected/5.9.0-Alpha2 Priority/High Severity/Major Type/Improvement config | In order to enable the previous behaviour template ResourceAccessControl "default-access". | 1.0 | Template ResourceAccessControl "default-access" - In order to enable the previous behaviour template ResourceAccessControl "default-access". | priority | template resourceaccesscontrol default access in order to enable the previous behaviour template resourceaccesscontrol default access | 1 |
501,883 | 14,535,721,710 | IssuesEvent | 2020-12-15 06:15:07 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.xvideos.com - site is not usable | browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:84.0) Gecko/20100101 Firefox/84.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63656 -->
**URL**: http://www.xvideos.com/?k=kaya+scodelario+xxx+boobs+sucking
**Browser / Version**: Firefox 84.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
i cant watch video
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201203211213</li><li>channel: aurora</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/eb2804a2-38ae-439d-a956-307d92f3bb85)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.xvideos.com - site is not usable - <!-- @browser: Firefox 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:84.0) Gecko/20100101 Firefox/84.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63656 -->
**URL**: http://www.xvideos.com/?k=kaya+scodelario+xxx+boobs+sucking
**Browser / Version**: Firefox 84.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
i cant watch video
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201203211213</li><li>channel: aurora</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/eb2804a2-38ae-439d-a956-307d92f3bb85)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce i cant watch video browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel aurora hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
684,893 | 23,436,901,611 | IssuesEvent | 2022-08-15 10:55:02 | hypertrons/hypertrons-crx | https://api.github.com/repos/hypertrons/hypertrons-crx | closed | [Bug] tab Perceptor not running | kind/bug priority/high | ### Version
1.6.4
### Current Behavior
tab Perceptor not running
### Expected Behavior
_No response_
### Environment
```markdown
- OS: Windows
- Browser: Edge/Chrome
```
### Any additional comments?
_No response_ | 1.0 | [Bug] tab Perceptor not running - ### Version
1.6.4
### Current Behavior
tab Perceptor not running
### Expected Behavior
_No response_
### Environment
```markdown
- OS: Windows
- Browser: Edge/Chrome
```
### Any additional comments?
_No response_ | priority | tab perceptor not running version current behavior tab perceptor not running expected behavior no response environment markdown os windows browser edge chrome any additional comments no response | 1 |
533,554 | 15,593,298,431 | IssuesEvent | 2021-03-18 12:46:48 | dsccommunity/xDnsServer | https://api.github.com/repos/dsccommunity/xDnsServer | closed | DnsRecordA: New resource proposal | high priority in progress resource proposal | ### Description
In line with #34, a DSCResource for A records should be created. For consistency, the resource should be named `DnsRecordA`.
### Proposed properties
#### Mandatory
- ZoneName: Specifies the name of a DNS zone.
- Name: Specifies the name of a DNS server resource record object.
- IPv4Address: Specifies the IPv4 address of a host.
#### Optional
- DnsServer: Specifies a DNS server. If you do not specify this parameter, the command runs on the local system. You can specify an IP address or any value that resolves to an IP address, such as a fully qualified domain name (FQDN), host name, or NETBIOS name.
- CreatePtr: Indicates that the DNS server automatically creates an associated pointer (PTR) resource record for an A or AAAA record. A PTR resource record maps an IP address to a host name.
- TimeToLive: Specifies the Time to Live (TTL) value, in seconds, for a resource record. Other DNS servers use this length of time to determine how long to cache a record.
- ZoneScope: Specifies the name of a zone scope.
- Ensure: Denotes whether the record should be `Present` or `Absent`
### Special considerations or limitations
Not all mandatory properties need to be key properties. Please review the syntax and purpose of this record type to make the best determination of which properties should be key properties.
All proposed properties above options come straight from the `Add-DnsServerResourceRecord` help syntax with two exceptions:
- `ComputerName` has been changed to `DnsServer`
| 1.0 | DnsRecordA: New resource proposal - ### Description
In line with #34, a DSCResource for A records should be created. For consistency, the resource should be named `DnsRecordA`.
### Proposed properties
#### Mandatory
- ZoneName: Specifies the name of a DNS zone.
- Name: Specifies the name of a DNS server resource record object.
- IPv4Address: Specifies the IPv4 address of a host.
#### Optional
- DnsServer: Specifies a DNS server. If you do not specify this parameter, the command runs on the local system. You can specify an IP address or any value that resolves to an IP address, such as a fully qualified domain name (FQDN), host name, or NETBIOS name.
- CreatePtr: Indicates that the DNS server automatically creates an associated pointer (PTR) resource record for an A or AAAA record. A PTR resource record maps an IP address to a host name.
- TimeToLive: Specifies the Time to Live (TTL) value, in seconds, for a resource record. Other DNS servers use this length of time to determine how long to cache a record.
- ZoneScope: Specifies the name of a zone scope.
- Ensure: Denotes whether the record should be `Present` or `Absent`
### Special considerations or limitations
Not all mandatory properties need to be key properties. Please review the syntax and purpose of this record type to make the best determination of which properties should be key properties.
All proposed properties above options come straight from the `Add-DnsServerResourceRecord` help syntax with two exceptions:
- `ComputerName` has been changed to `DnsServer`
| priority | dnsrecorda new resource proposal description in line with a dscresource for a records should be created for consistency the resource should be named dnsrecorda proposed properties mandatory zonename specifies the name of a dns zone name specifies the name of a dns server resource record object specifies the address of a host optional dnsserver specifies a dns server if you do not specify this parameter the command runs on the local system you can specify an ip address or any value that resolves to an ip address such as a fully qualified domain name fqdn host name or netbios name createptr indicates that the dns server automatically creates an associated pointer ptr resource record for an a or aaaa record a ptr resource record maps an ip address to a host name timetolive specifies the time to live ttl value in seconds for a resource record other dns servers use this length of time to determine how long to cache a record zonescope specifies the name of a zone scope ensure denotes whether the record should be present or absent special considerations or limitations not all mandatory properties need to be key properties please review the syntax and purpose of this record type to make the best determination of which properties should be key properties all proposed properties above options come straight from the add dnsserverresourcerecord help syntax with two exceptions computername has been changed to dnsserver | 1 |
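The property set proposed in the row above (mandatory vs. optional, with an `Ensure` flag) can be modeled compactly. A hedged sketch in Python — the actual resource is a PowerShell DSC class resource, so this is only an illustration of the schema, and the default values shown are assumptions:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DnsRecordA:
    """Illustrative model of the proposed DnsRecordA property set."""

    # Mandatory properties from the proposal
    zone_name: str
    name: str
    ipv4_address: str
    # Optional properties (defaults here are illustrative assumptions)
    dns_server: str = "localhost"
    create_ptr: bool = False
    time_to_live: Optional[int] = None  # seconds
    zone_scope: Optional[str] = None
    ensure: str = "Present"  # or "Absent"


record = DnsRecordA(zone_name="example.com", name="www",
                    ipv4_address="192.0.2.1")
assert record.ensure == "Present"
```

Which of the mandatory fields become *key* properties is left open by the proposal itself and would be decided against the record type's semantics.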
542,213 | 15,857,149,306 | IssuesEvent | 2021-04-08 04:05:13 | wso2/micro-integrator | https://api.github.com/repos/wso2/micro-integrator | closed | [Dashboard] Limit the Size of the Request Body | 4.0.0 Monitoring-Dashboard Priority/Highest Severity/Major | **Description:**
<!-- Give a brief description of the issue -->
Accepting request bodies with unnecessarily large sizes could help attackers to use less connections to achieve Layer 7 DDoS of the webserver. Therefore, we need to limit the size of the request body to each form's requirements. For example, a search form with a 256-char search field should not accept more than 1KB value.
As a suggestion, we can set "org.eclipse.jetty.server.Request.maxFormContentSize" property in the jetty server to limit the size of the request body. For more info: https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | 1.0 | [Dashboard] Limit the Size of the Request Body - **Description:**
<!-- Give a brief description of the issue -->
Accepting request bodies with unnecessarily large sizes could help attackers to use less connections to achieve Layer 7 DDoS of the webserver. Therefore, we need to limit the size of the request body to each form's requirements. For example, a search form with a 256-char search field should not accept more than 1KB value.
As a suggestion, we can set "org.eclipse.jetty.server.Request.maxFormContentSize" property in the jetty server to limit the size of the request body. For more info: https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | priority | limit the size of the request body description accepting request bodies with unnecessarily large sizes could help attackers to use less connections to achieve layer ddos of the webserver therefore we need to limit the size of the request body to each form s requirements for example a search form with a char search field should not accept more than value as a suggestion we can set org eclipse jetty server request maxformcontentsize property in the jetty server to limit the size of the request body for more info suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce related issues | 1 |
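The mitigation described in the row above is a generic pattern: reject a request whose body exceeds the form's expected size before buffering it, rather than letting oversized bodies tie up server resources. In Jetty specifically the issue points at the documented `org.eclipse.jetty.server.Request.maxFormContentSize` property; the sketch below only illustrates the underlying check in self-contained Python, with a hypothetical `max_body_size` limit:

```python
def check_body_size(content_length, max_body_size=1024):
    """Accept a request only if its declared body fits the limit.

    Mirrors the idea behind Jetty's maxFormContentSize: a form whose
    largest field is ~256 chars has no business sending more than ~1 KB.
    """
    if content_length is None:
        # No Content-Length header: a real server would instead read the
        # body incrementally and abort once the cap is exceeded.
        return False
    return content_length <= max_body_size


assert check_body_size(256) is True
assert check_body_size(10_000) is False
```

The per-form sizing matters: a single global limit large enough for file uploads still leaves small search endpoints open to the Layer 7 amplification the issue describes.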