| Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 5 to 112) | repo_url (string, length 34 to 141) | action (string, 3 classes) | title (string, length 1 to 957) | labels (string, length 4 to 795) | body (string, length 1 to 259k) | index (string, 12 classes) | text_combine (string, length 96 to 259k) | label (string, 2 classes) | text (string, length 96 to 252k) | binary_label (int64, 0 or 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
730,077 | 25,158,354,039 | IssuesEvent | 2022-11-10 15:05:06 | ooni/backend | https://api.github.com/repos/ooni/backend | closed | auth: remove the nickname field | ooni/api priority/medium | We decided that the nickname field has limited usefulness.
The reasons are the following:
- It's confusing to users what this is
- Every time somebody logs in they have to respecify it since it's not stored
- It looks like an identity (it's called nickname), but it's not unique and for the same email can change
- It's more complexity for not so much benefit | 1.0 | auth: remove the nickname field - We decided that the nickname field has limited usefulness.
The reasons are the following:
- It's confusing to users what this is
- Every time somebody logs in they have to respecify it since it's not stored
- It looks like an identity (it's called nickname), but it's not unique and for the same email can change
- It's more complexity for not so much benefit | priority | auth remove the nickname field we decided that the nickname field has limited usefulness the reasons are the following it s confusing to users what this is every time somebody logs in they have to respecify it since it s not stored it looks like an identity it s called nickname but it s not unique and for the same email can change it s more complexity for not so much benefit | 1 |
181,912 | 6,665,196,397 | IssuesEvent | 2017-10-02 23:33:23 | classifiedz/classifiedz.github.io | https://api.github.com/repos/classifiedz/classifiedz.github.io | opened | On homepage, show ads that have been created most recently first + limit to 25 ads (implement pagination in another sprint). | Low Priority Low Risk Medium Priority | This can be fixed in HomeController as seen here
Resource
https://laravel.com/docs/5.5/eloquent | 2.0 | On homepage, show ads that have been created most recently first + limit to 25 ads (implement pagination in another sprint). - This can be fixed in HomeController as seen here
Resource
https://laravel.com/docs/5.5/eloquent | priority | on homepage show ads that have been created most recently first limit to ads implement pagination in another sprint this can be fixed in homecontroller as seen here resource | 1 |
499,082 | 14,439,772,805 | IssuesEvent | 2020-12-07 14:47:15 | projectdissolve/dissolve | https://api.github.com/repos/projectdissolve/dissolve | opened | Epic / Documentation 0.7 | Priority: Medium | ### Focus
Provide basic information on all aspects in the code to accompany 0.7 release.
### Tasks
- [ ] #205
- [ ] #209
- [ ] #204
- [ ] #197
- [ ] #172
- [ ] #176
- [ ] #180
- [ ] #182
- [ ] #184
| 1.0 | Epic / Documentation 0.7 - ### Focus
Provide basic information on all aspects in the code to accompany 0.7 release.
### Tasks
- [ ] #205
- [ ] #209
- [ ] #204
- [ ] #197
- [ ] #172
- [ ] #176
- [ ] #180
- [ ] #182
- [ ] #184
| priority | epic documentation focus provide basic information on all aspects in the code to accompany release tasks | 1 |
141,502 | 5,437,040,122 | IssuesEvent | 2017-03-06 04:44:54 | CS2103JAN2017-W10-B4/main | https://api.github.com/repos/CS2103JAN2017-W10-B4/main | opened | As a user I can search for event/deadline/task by attribute | priority.medium type.story | ... so that I can find out details on specific event/deadline/task | 1.0 | As a user I can search for event/deadline/task by attribute - ... so that I can find out details on specific event/deadline/task | priority | as a user i can search for event deadline task by attribute so that i can find out details on specific event deadline task | 1 |
98,904 | 4,039,403,395 | IssuesEvent | 2016-05-20 04:36:01 | shelljs/shelljs | https://api.github.com/repos/shelljs/shelljs | reopened | Plugin System. | feat medium priority question refactor | **[NOTE: This thread was originally about the `open` command. I'm hijacking it. -@ariporad]**
It would be nice to have a plugin system for shelljs which allows custom commands. | 1.0 | Plugin System. - **[NOTE: This thread was originally about the `open` command. I'm hijacking it. -@ariporad]**
It would be nice to have a plugin system for shelljs which allows custom commands. | priority | plugin system it would be nice to have a plugin system for shelljs which allows custom commands | 1 |
55,445 | 3,073,445,120 | IssuesEvent | 2015-08-19 22:00:20 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Add more getXXX(int resID) & clickOnXXX(int resID) methods | bug imported Priority-Medium wontfix | _From [courag...@gmail.com](https://code.google.com/u/111939886635846010704/) on December 05, 2012 16:11:40_
It would be nice if we had more of those getXXX() & clickOnXXX() methods that take a Resource ID as parameter, so that it's easier for us to pin-point exactly which visual controls we are referring to. Especially when now considering Fragments: I may have multiple Fragments that contain multiple ListViews, and clickInList() doesn't quite work for the 2nd ListView. I had to develop something to work around that (searchText(), scrollUp or scrollDown to find the item and then click()).
Robotium is just a great addition on the Instrumentation. Many thanks for all of your great effort.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=363_ | 1.0 | Add more getXXX(int resID) & clickOnXXX(int resID) methods - _From [courag...@gmail.com](https://code.google.com/u/111939886635846010704/) on December 05, 2012 16:11:40_
It would be nice if we had more of those getXXX() & clickOnXXX() methods that take a Resource ID as parameter, so that it's easier for us to pin-point exactly which visual controls we are referring to. Especially when now considering Fragments: I may have multiple Fragments that contain multiple ListViews, and clickInList() doesn't quite work for the 2nd ListView. I had to develop something to work around that (searchText(), scrollUp or scrollDown to find the item and then click()).
Robotium is just a great addition on the Instrumentation. Many thanks for all of your great effort.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=363_ | priority | add more getxxx int resid clickonxxx int resid methods from on december it would be nice if we have more of those getxxx clickonxxx methods by resource id as parameter so that it s easier for us to pin point exactly which visual controls we are referencing to especially when now considering fragments and i may have multiple fragments that contain multiple listview and now clickinlist don t quite work for the listview i have to developed something to work around that searchtext scrollup or scrolldown to find the item and then click robotium is just a great addition on the instrumentation many thanks for all of your great effort original issue | 1 |
103,988 | 4,188,185,482 | IssuesEvent | 2016-06-23 19:54:01 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Make sure we're using Sort blocks where applicable | Improvement Low-Hanging Fruit Priority: Medium | Many Spice IAs sort their results but aren't using a sorting block, which they really should.
A quick check shows several Spices that aren't correctly sorting their results:
- [x] Airlines
- [ ] Book
- [ ] Detect Lang
- [x] People In Space
- [ ] DNS
- [ ] Bootic
- [x] GitHub
- [x] Congress
- [x] Recipes
```js
share/spice/book/book.js
67: recommendedBy.sort(function(){return (4*Math.random()>2)?1:-1});
share/spice/bootic/bootic.js
14: for(var i = 0; i < api_result.sorted.length; i++) {
15: result.push(api_result.products[api_result.sorted[i]]);
share/spice/detect_lang/detect_lang.js
21: api_result.data.detections.sort(function(a, b) {
share/spice/dns/dns.js
10: api_result.response.records.sort(function(a, b) {
share/spice/recipes/recipes.js
109: return sparse.sort(function(a,b){
``` | 1.0 | Make sure we're using Sort blocks where applicable - Many Spice IAs sort their results but aren't using a sorting block, which they really should.
A quick check shows several Spices that aren't correctly sorting their results:
- [x] Airlines
- [ ] Book
- [ ] Detect Lang
- [x] People In Space
- [ ] DNS
- [ ] Bootic
- [x] GitHub
- [x] Congress
- [x] Recipes
```js
share/spice/book/book.js
67: recommendedBy.sort(function(){return (4*Math.random()>2)?1:-1});
share/spice/bootic/bootic.js
14: for(var i = 0; i < api_result.sorted.length; i++) {
15: result.push(api_result.products[api_result.sorted[i]]);
share/spice/detect_lang/detect_lang.js
21: api_result.data.detections.sort(function(a, b) {
share/spice/dns/dns.js
10: api_result.response.records.sort(function(a, b) {
share/spice/recipes/recipes.js
109: return sparse.sort(function(a,b){
``` | priority | make sure we re using sort blocks where applicable many spice ia s sort their results but aren t using a sorting block which they really should a quick check shows several spice s that aren t correctly sorting their results airlines book detect lang people in space dns bootic github congress recipes js share spice book book js recommendedby sort function return math random share spice bootic bootic js for var i i api result sorted length i result push api result products share spice detect lang detect lang js api result data detections sort function a b share spice dns dns js api result response records sort function a b share spice recipes recipes js return sparse sort function a b | 1 |
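The grep hits above mix two patterns worth contrasting: `book.js` shuffles by giving `sort()` a random comparator, which is an antipattern (comparison functions must be consistent, or the result is implementation-defined), while `detect_lang.js` sorts detections by a field. A hedged Python sketch of the safe equivalents:

```python
import random

# book.js shuffles via a random comparator; the explicit shuffle is safer.
recommended_by = ["Alice", "Bob", "Carol", "Dave"]
random.shuffle(recommended_by)  # randomize order without abusing sort()

# detect_lang.js sorts detections by confidence; use a key function,
# highest confidence first.
detections = [
    {"language": "en", "confidence": 0.2},
    {"language": "fr", "confidence": 0.9},
]
detections.sort(key=lambda d: d["confidence"], reverse=True)
print(detections[0]["language"])  # → fr
```

The field names here are illustrative; the actual Spice sort blocks wrap the same idea behind a declarative option.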
467,269 | 13,444,815,970 | IssuesEvent | 2020-09-08 10:22:39 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [9.0 staging-1756] Faulty tooltips in Host Private World menu | Category: UI Priority: Medium | - All tooltips in the custom settings to host a private world have the same title: "Specialty Cost Multiplier"
- The tooltip descriptions for Craft Resource Multiplier and Craft Time Multiplier are reversed.


| 1.0 | [9.0 staging-1756] Faulty tooltips in Host Private World menu - - All tooltips in the custom settings to host a private world have the same title: "Specialty Cost Multiplier"
- The tooltip descriptions for Craft Resource Multiplier and Craft Time Multiplier are reversed.


| priority | faulty tooltips in host private world menu all tooltips in the custom settings to host a private world have the same title specialty cost multiplier the tooltip descriptions for craft resource multiplier and craft time multiplier are reversed | 1 |
137,292 | 5,301,349,593 | IssuesEvent | 2017-02-10 09:18:56 | HBHWoolacotts/RPii | https://api.github.com/repos/HBHWoolacotts/RPii | closed | Booking a New Service Job allows you to change the Customer Surname | FIXED - HBH Live Priority - Medium | When you book a new service job and get the customer list screen, clicking the Edit button allows you to change the surname. Please can this be prevented, as the customer account disappears from searches when surnames don't match the code.

 | 1.0 | Booking a New Service Job allows you to change the Customer Surname - When you book a new service job and get the customer list screen, clicking the Edit button allows you to change the surname. Please can this be prevented, as the customer account disappears from searches when surnames don't match the code.

 | priority | booking a new service job allows you to change the customer surname when you book a new service job and get the customer list screen clicking the edit button allows you to change the surname please can this be prevented as the customer account disappears from searches when surnames don t match the code | 1 |
782,977 | 27,512,961,458 | IssuesEvent | 2023-03-06 10:05:01 | sjdv1982/seamless | https://api.github.com/repos/sjdv1982/seamless | closed | Imperative transformations inside transformer code | new feature medium priority | Allow transformations to be launched within transformer code (distinct from #130)
To avoid vendor lock-in, such an API should not be called Seamless. Future libraries should be free to re-implement the API, and as long as they stick to SHA3-256 (and Seamless's scheme of semantic checksumming), their transformations should federate with Seamless.
Possible syntax:
`tf.py`
```python
from reproducible import transformer

@transformer
def func(a, b):
    return mymodule.THOUSAND + a + b

func.mymodule = mymodule

result = []
for a, b in args:
    subresult = func(a, b)
    result.append(subresult)
```
`main workflow`
```python
ctx.tf = Transformer()
ctx.mymodule = Module()
ctx.tf.code.mount("tf.py")
ctx.tf.mymodule = ctx.mymodule
ctx.tf.args = [(1,2), (3,4), (5,6)]
ctx.compute()
ctx.result
```
`[1003, 1007, 1011]`
Every tf.py execution library (Seamless, or something else) is free to insert `reproducible` in any of three modes:
- dummy. The transformer decorator does nothing
- fat. Essentially, invoke `seamless.run_transformation` locally, after calculating the checksums. Requires read-only database.
- thin. Essentially, forward the job to jobless, after calculating the checksums. Requires read+write database and jobless.
`ctx.tf.meta` can declare resources for each of the three modes.
| 1.0 | Imperative transformations inside transformer code - Allow transformations to be launched within transformer code (distinct from #130)
To avoid vendor lock-in, such an API should not be called Seamless. Future libraries should be free to re-implement the API, and as long as they stick to SHA3-256 (and Seamless's scheme of semantic checksumming), their transformations should federate with Seamless.
Possible syntax:
`tf.py`
```python
from reproducible import transformer

@transformer
def func(a, b):
    return mymodule.THOUSAND + a + b

func.mymodule = mymodule

result = []
for a, b in args:
    subresult = func(a, b)
    result.append(subresult)
```
`main workflow`
```python
ctx.tf = Transformer()
ctx.mymodule = Module()
ctx.tf.code.mount("tf.py")
ctx.tf.mymodule = ctx.mymodule
ctx.tf.args = [(1,2), (3,4), (5,6)]
ctx.compute()
ctx.result
```
`[1003, 1007, 1011]`
Every tf.py execution library (Seamless, or something else) is free to insert `reproducible` in any of three modes:
- dummy. The transformer decorator does nothing
- fat. Essentially, invoke `seamless.run_transformation` locally, after calculating the checksums. Requires read-only database.
- thin. Essentially, forward the job to jobless, after calculating the checksums. Requires read+write database and jobless.
`ctx.tf.meta` can declare resources for each of the three modes.
| priority | imperative transformations inside transformer code allow transformations to be launched within transformer code distinct from to avoid vendor lock in such an api should not be called seamless future libraries should be free to re implement the api and as long as they stick to and seamless s scheme of semantic checksumming their transformations should federate with seamless possible syntax tf py python from reproducible import transformer transformer def func a b return mymodule thousand a b func mymodule mymodule result for a b in args subresult func a b result append subresult main workflow python ctx tf transformer ctx mymodule module ctx tf code mount tf py ctx tf mymodule ctx mymodule ctx tf args ctx compute ctx result every tf py execution library seamless or something else is free to insert reproducible in any of three modes dummy the transformer decorator does nothing fat essentially invoke seamless run transformation locally after calculating the checksums requires read only database thin essentially forward the job to jobless after calculating the checksums requires read write database and jobless ctx tf meta can declare resources for each of the three modes | 1 |
543,028 | 15,876,520,819 | IssuesEvent | 2021-04-09 08:28:38 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | opened | [IDE] Monaco - Add support for SignatureHelpProvider | component-ide efforts-medium enhancement priority-medium usability web-ide | Add support for [SignatureHelpProvider ](https://microsoft.github.io/monaco-editor/api/interfaces/monaco.languages.signaturehelpprovider.html#providesignaturehelp)
Sample: https://jsfiddle.net/hec12da1/
Related Monaco Issues:
- https://github.com/microsoft/monaco-editor/issues/243
- https://github.com/microsoft/monaco-editor/issues/1145 | 1.0 | [IDE] Monaco - Add support for SignatureHelpProvider - Add support for [SignatureHelpProvider ](https://microsoft.github.io/monaco-editor/api/interfaces/monaco.languages.signaturehelpprovider.html#providesignaturehelp)
Sample: https://jsfiddle.net/hec12da1/
Related Monaco Issues:
- https://github.com/microsoft/monaco-editor/issues/243
- https://github.com/microsoft/monaco-editor/issues/1145 | priority | monaco add support for signaturehelpprovider add support for sample related monaco issues | 1 |
109,029 | 4,366,561,996 | IssuesEvent | 2016-08-03 14:41:29 | LearningLocker/learninglocker | https://api.github.com/repos/LearningLocker/learninglocker | closed | Is there a way to do a webhook per LRS? | priority:medium status:confirmed type:question | Like this: https://github.com/LearningLocker/learninglocker/settings/hooks/new
our client has an internal progression tracking system, so I'm just wondering if the LRS can send a webhook to other services?
E.g.: listen on statement verb
```
On 'passed' verb
Make a request with custom payload to designated service
On 'completed' verb
Make a request with custom payload to designated service
```
It seems not at the moment, but could we have something similar to that in the roadmap?
Thank you
| 1.0 | Is there a way to do a webhook per LRS? - Like this: https://github.com/LearningLocker/learninglocker/settings/hooks/new
our client has an internal progression tracking system, so I'm just wondering if the LRS can send a webhook to other services?
E.g.: listen on statement verb
```
On 'passed' verb
Make a request with custom payload to designated service
On 'completed' verb
Make a request with custom payload to designated service
```
It seems not at the moment, but could we have something similar to that in the roadmap?
Thank you
| priority | is there a way to do a webhook per lrs like this our client have an internal progression tracking system so just wonder if lrs can do a webhook to other services e g listen on statement verb on passed verb make a request with custom payload to designated service on completed verb make a request with custom payload to designated service seems to be not at the moment but would we have something similar to that in the roadmap thank you | 1 |
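A hedged sketch of the verb-triggered webhook described in the question above; the hook URLs, payload shape, and function name are assumptions for illustration, not a Learning Locker API.

```python
import json
from urllib import request

# Assumed configuration: one hook URL per verb of interest.
HOOKS = {
    "passed": "https://example.org/hooks/passed",
    "completed": "https://example.org/hooks/completed",
}

def dispatch(statement):
    """Build a webhook request if the statement's verb has a hook configured."""
    url = HOOKS.get(statement.get("verb"))
    if url is None:
        return None  # no hook registered for this verb
    payload = json.dumps(statement).encode("utf-8")
    return request.Request(url, data=payload,
                           headers={"Content-Type": "application/json"})

req = dispatch({"verb": "passed", "actor": "mailto:student@example.org"})
print(req.full_url if req else "no hook")  # → https://example.org/hooks/passed
```

The caller would pass the returned request to `urllib.request.urlopen` (plus retry and authentication logic) to actually deliver the payload.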
686,650 | 23,500,072,531 | IssuesEvent | 2022-08-18 07:36:24 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YCQL] Support additional bind formats for multi-column IN clause | kind/enhancement priority/medium area/ycql | Jira Link: [DB-2990](https://yugabyte.atlassian.net/browse/DB-2990)
### Description
With https://github.com/yugabyte/yugabyte-db/issues/12938, we added support for the multi-column IN clause. It added support for the below bind format:
```
SELECT ... WHERE (r1, r2) IN ((?, ?), (?, ?));
```
The task is to support the below formats as well:
1. `(r1, r2) IN (?, ?)`
2. `(r1, r2) IN ?` | 1.0 | [YCQL] Support additional bind formats for multi-column IN clause - Jira Link: [DB-2990](https://yugabyte.atlassian.net/browse/DB-2990)
### Description
With https://github.com/yugabyte/yugabyte-db/issues/12938, we added support for the multi-column IN clause. It added support for the below bind format:
```
SELECT ... WHERE (r1, r2) IN ((?, ?), (?, ?));
```
The task is to support the below formats as well:
1. `(r1, r2) IN (?, ?)`
2. `(r1, r2) IN ?` | priority | support additional bind formats for multi column in clause jira link description with we added support for the multi column in clause it added support for the below bind format select where in the task is to support the below formats as well in in | 1 |
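The three bind formats in the issue above differ only in how many bind markers the client supplies for the same logical query. A small illustration for two (r1, r2) pairs, using plain strings rather than driver calls:

```python
# Illustrative only: marker counts per bind format, not YCQL driver code.
formats = {
    "(r1, r2) IN ((?, ?), (?, ?))": 4,  # one marker per column per tuple
    "(r1, r2) IN (?, ?)": 2,            # one marker per tuple
    "(r1, r2) IN ?": 1,                 # one marker binding the whole list
}
for clause, markers in formats.items():
    assert clause.count("?") == markers
    print(f"{clause}  ->  {markers} bound value(s)")
```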
249,154 | 7,953,925,342 | IssuesEvent | 2018-07-12 04:49:56 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: Stuck inside a player's building | Medium Priority | **Version:** 0.7.3.3 beta
**Steps to Reproduce:**
Log off close to a player's building
Log back on.
**Expected behavior:**
I expected to log back into the game outside a player's building where I was standing at the time
**Actual behavior:**
I spawned inside a player's building instead | 1.0 | USER ISSUE: Stuck inside a player's building - **Version:** 0.7.3.3 beta
**Steps to Reproduce:**
Log off close to a player's building
Log back on.
**Expected behavior:**
I expected to log back into the game outside a player's building where I was standing at the time
**Actual behavior:**
I spawned inside a player's building instead | priority | user issue stuck inside a players building version beta steps to reproduce log off close to a players building log back on expected behavior i expected to log back into the game outside a players building where i was standing at the time actual behavior i spawned inside a players building instead | 1 |
704,939 | 24,215,330,650 | IssuesEvent | 2022-09-26 06:05:01 | OpenMined/PySyft | https://api.github.com/repos/OpenMined/PySyft | closed | Hagrid launch show docker compose version instead of docker version | Type: Bug :bug: Priority: 3 - Medium :unamused: PwP | ## Description
A clear and concise description of the bug.
## How to Reproduce
1. hagrid launch domain_name to docker:8081 --tail=false --tag=latest --silent
2. See the Docker version printed. It prints the docker compose version instead of docker version
## Expected Behavior
A clear and concise description of what you expected to happen.
The text should either say "Docker Compose version" instead of "Docker version", or the command should print the Docker version instead of the Docker Compose version.
## Screenshots
If applicable, add screenshots to help explain your problem.
<img width="2167" alt="screenshot-mismatched-docker-version-printing" src="https://user-images.githubusercontent.com/11032835/191921254-8aba7022-d84f-4f7c-aa4f-9dd883a6413d.png">
## System Information
- OS: [e.g. iOS] MacOS
- OS Version: [e.g. 22] M1
- Language Version: [e.g. Python 3.7, Node 10.18.1]
- Package Manager Version: [e.g. Conda 4.6.1, NPM 6.14.1]
- Browser (if applicable): [e.g. Google Chrome]
- Browser Version (if applicable): [e.g. 81.0.4044.138]
## Additional Context
Add any other context about the problem here.
| 1.0 | Hagrid launch show docker compose version instead of docker version - ## Description
A clear and concise description of the bug.
## How to Reproduce
1. hagrid launch domain_name to docker:8081 --tail=false --tag=latest --silent
2. See the Docker version printed. It prints the docker compose version instead of docker version
## Expected Behavior
A clear and concise description of what you expected to happen.
The text should either say "Docker Compose version" instead of "Docker version", or the command should print the Docker version instead of the Docker Compose version.
## Screenshots
If applicable, add screenshots to help explain your problem.
<img width="2167" alt="screenshot-mismatched-docker-version-printing" src="https://user-images.githubusercontent.com/11032835/191921254-8aba7022-d84f-4f7c-aa4f-9dd883a6413d.png">
## System Information
- OS: [e.g. iOS] MacOS
- OS Version: [e.g. 22] M1
- Language Version: [e.g. Python 3.7, Node 10.18.1]
- Package Manager Version: [e.g. Conda 4.6.1, NPM 6.14.1]
- Browser (if applicable): [e.g. Google Chrome]
- Browser Version (if applicable): [e.g. 81.0.4044.138]
## Additional Context
Add any other context about the problem here.
| priority | hagrid launch show docker compose version instead of docker version description a clear and concise description of the bug how to reproduce hagrid launch domain name to docker tail false tag latest silent see the docker version printed it prints the docker compose version instead of docker version expected behavior a clear and concise description of what you expected to happen the text should either say the docker compose version instead of docker version or it should print docker version instead of docker compose version screenshots if applicable add screenshots to help explain your problem img width alt screenshot mismatched docker version printing src system information os macos os version language version package manager version browser if applicable browser version if applicable additional context add any other context about the problem here | 1 |
357,501 | 10,607,546,473 | IssuesEvent | 2019-10-11 04:17:08 | canonical-web-and-design/maas-ui | https://api.github.com/repos/canonical-web-and-design/maas-ui | closed | Provide UI feedback if websocket disconnects & reconnects | Enhancement ✨ Priority: Medium | Having implemented `reconnecting-websocket` in https://github.com/canonical-web-and-design/maas-ui/pull/192 it would be nice if we also provided some UI feedback when the connection drops and reconnects. | 1.0 | Provide UI feedback if websocket disconnects & reconnects - Having implemented `reconnecting-websocket` in https://github.com/canonical-web-and-design/maas-ui/pull/192 it would be nice if we also provided some UI feedback when the connection drops and reconnects. | priority | provide ui feedback if websocket disconnects reconnects having implemented reconnecting websocket in it would be nice if we also provided some ui feedback when the connection drops and reconnects | 1 |
623,064 | 19,660,313,028 | IssuesEvent | 2022-01-10 16:22:51 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Add a profile type dropdown in the Profile Search form field option | priority-medium feature Stale | **Describe the bug**
Add a profile type dropdown in the Profile Search form field option, so users can search members more effectively and narrow down the search result.
**Screenshots**
https://prnt.sc/
**Jira issue** : [PROD-926]
[PROD-926]: https://buddyboss.atlassian.net/browse/PROD-926?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | Add a profile type dropdown in the Profile Search form field option - **Describe the bug**
Add a profile type dropdown in the Profile Search form field option, so users can search members more effectively and narrow down the search result.
**Screenshots**
https://prnt.sc/
**Jira issue** : [PROD-926]
[PROD-926]: https://buddyboss.atlassian.net/browse/PROD-926?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | add a profile type dropdown in the profile search form filed option describe the bug add a profile type dropdown in the profile search form filed option so users can search members more effectively and narrow down the search result screenshots jira issue | 1 |
690,789 | 23,672,385,580 | IssuesEvent | 2022-08-27 15:06:26 | ArjunSharda/TimeConv | https://api.github.com/repos/ArjunSharda/TimeConv | closed | Credits page on mobile has text below footer | bug help wanted Priority: Medium | Credits page on mobile has some text below the footer. I have tried fixing it, but need help with solving it. Any help would be greatly appreciated. | 1.0 | Credits page on mobile has text below footer - Credits page on mobile has some text below the footer. I have tried fixing it, but need help with solving it. Any help would be greatly appreciated. | priority | credits page on mobile has text below footer credits page on mobile has some text below the footer i have tried fixing it but need help with solving it any help would be greatly appreciated | 1 |
471,375 | 13,565,764,303 | IssuesEvent | 2020-09-18 12:15:56 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1722] Incorrect progress display of Work Orders | Category: Localization Category: UI Priority: Medium Status: Fixed Status: Reopen | When using a translation (currently Russian) with a long text, the progress bar and buttons are displayed incorrectly

Here's how it looks in the original

| 1.0 | [0.9.0 staging-1722] Incorrect progress display of Work Orders - When using a translation (currently Russian) with a long text, the progress bar and buttons are displayed incorrectly

Here's how it looks in the original

| priority | incorrect progress display of work orders when using a translation currently russian with a long text the progress bar and buttons are displayed incorrectly here s how it looks in the original | 1 |
655,376 | 21,687,641,930 | IssuesEvent | 2022-05-09 12:50:13 | SimplyVC/panic | https://api.github.com/repos/SimplyVC/panic | opened | Installation wizard - Node setup test buttons - Substrate base-chain | UI iteration 2 Priority: Medium | ### User Story
As a node operator, I want to be able to verify that the Node exporter URL is correctly configured before my settings are finalised.
### Description
The scope of this task is **limited exclusively to the frontend aspect** of the test button associated with the nodes setup step - see #48.
### Requirements
Introduce a Test button for the Node exporter URL.
See https://172.16.152.17:8000/
### Blocked by
Not dependent on another ticket to be tackled.
### Acceptance criteria
**Scenario**: Node operator is using the installation wizard to setup monitoring and alerting for a Substrate chain
**Given**: The node operator is in the Nodes setup step
**Then**: The node operator is presented with a Test button corresponding to the Node exporter URL per node he wants to set up.
| 1.0 | Installation wizard - Node setup test buttons - Substrate base-chain - ### User Story
As a node operator, I want to be able to verify that the Node exporter URL is correctly configured before my settings are finalised.
### Description
The scope of this task is **limited exclusively to the frontend aspect** of the test button associated with the nodes setup step - see #48.
### Requirements
Introduce a Test button for the Node exporter URL.
See https://172.16.152.17:8000/
### Blocked by
Not dependent on another ticket to be tackled.
### Acceptance criteria
**Scenario**: Node operator is using the installation wizard to setup monitoring and alerting for a Substrate chain
**Given**: The node operator is in the Nodes setup step
**Then**: The node operator is presented with a Test button corresponding to the Node exporter URL per node he wants to set up.
| priority | installation wizard node setup test buttons substrate base chain user story as a node operator i want to be able to verify that the node exporter url is correctly configured before my settings are finalised description the scope of this task is limited exclusively to the frontend aspect of the test button associated with the nodes setup step see requirements introduce a test button for the node exporter url see blocked by not dependent on another ticket to be tackled acceptance criteria scenario node operator is using the installation wizard to setup monitoring and alerting for a substrate chain given the node operator is in the nodes setup step then the node operator is presented with a test button corresponding to the node exporter url per node he wants to set up | 1 |
407,949 | 11,939,895,224 | IssuesEvent | 2020-04-02 15:51:36 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | launching jupyter notebook on tutorials errors out | configuration: linux priority: medium team: kitware type: bug | Testing #12646 locally on my ubuntu machine demonstrated a failure to launch jupyter:
```
russt@Puget-179850-01:~/drake/tutorials$ bazel run rendering_multibody_plant
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 4d6b6aea-e675-4fa1-b8e4-4cde0f9435dd
INFO: Analyzed target //tutorials:rendering_multibody_plant (197 packages loaded, 22541 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /home/russt/.cache/bazel/_bazel_russt/a0bdb2099cb05916281dea472bfce61b/sandbox
Target //tutorials:rendering_multibody_plant up-to-date:
bazel-bin/tutorials/rendering_multibody_plant_jupyter_py_main.py
bazel-bin/tutorials/rendering_multibody_plant
INFO: Elapsed time: 253.006s, Critical Path: 184.69s
INFO: 428 processes: 428 linux-sandbox.
INFO: Build completed successfully, 449 total actions
INFO: Build completed successfully, 449 total actions
Running notebook interactively
[I 07:09:00.637 NotebookApp] Serving notebooks from local directory: /home/russt/.cache/bazel/_bazel_russt/a0bdb2099cb05916281dea472bfce61b/execroot/drake/bazel-out/k8-opt/bin/tutorials/rendering_multibody_plant.runfiles/drake/tutorials
[I 07:09:00.638 NotebookApp] The Jupyter Notebook is running at:
[I 07:09:00.638 NotebookApp] http://localhost:8888/?token=09ba3881df211e8e63672b610c39c30bd16750bae3e9422c
[I 07:09:00.638 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python2.7/dist-packages/notebook/notebookapp.py", line 1784, in start
self.launch_browser()
File "/usr/local/lib/python2.7/dist-packages/notebook/notebookapp.py", line 1742, in launch_browser
with open(fd, 'w', encoding='utf-8') as fh:
TypeError: coercing to Unicode: need string or buffer, int found
```
The same error occurs when I try any of the other examples.
```
russt@Puget-179850-01:~/drake/tutorials$ jupyter notebook --version
5.7.4
```
I've also rerun `install_prereqs` and confirmed that this does not fix anything. | 1.0 | launching jupyter notebook on tutorials errors out - Testing #12646 locally on my ubuntu machine demonstrated a failure to launch jupyter:
```
russt@Puget-179850-01:~/drake/tutorials$ bazel run rendering_multibody_plant
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 4d6b6aea-e675-4fa1-b8e4-4cde0f9435dd
INFO: Analyzed target //tutorials:rendering_multibody_plant (197 packages loaded, 22541 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /home/russt/.cache/bazel/_bazel_russt/a0bdb2099cb05916281dea472bfce61b/sandbox
Target //tutorials:rendering_multibody_plant up-to-date:
bazel-bin/tutorials/rendering_multibody_plant_jupyter_py_main.py
bazel-bin/tutorials/rendering_multibody_plant
INFO: Elapsed time: 253.006s, Critical Path: 184.69s
INFO: 428 processes: 428 linux-sandbox.
INFO: Build completed successfully, 449 total actions
INFO: Build completed successfully, 449 total actions
Running notebook interactively
[I 07:09:00.637 NotebookApp] Serving notebooks from local directory: /home/russt/.cache/bazel/_bazel_russt/a0bdb2099cb05916281dea472bfce61b/execroot/drake/bazel-out/k8-opt/bin/tutorials/rendering_multibody_plant.runfiles/drake/tutorials
[I 07:09:00.638 NotebookApp] The Jupyter Notebook is running at:
[I 07:09:00.638 NotebookApp] http://localhost:8888/?token=09ba3881df211e8e63672b610c39c30bd16750bae3e9422c
[I 07:09:00.638 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python2.7/dist-packages/notebook/notebookapp.py", line 1784, in start
self.launch_browser()
File "/usr/local/lib/python2.7/dist-packages/notebook/notebookapp.py", line 1742, in launch_browser
with open(fd, 'w', encoding='utf-8') as fh:
TypeError: coercing to Unicode: need string or buffer, int found
```
The same error occurs when I try any of the other examples.
```
russt@Puget-179850-01:~/drake/tutorials$ jupyter notebook --version
5.7.4
```
I've also rerun `install_prereqs` and confirmed that this does not fix anything. | priority | launching jupyter notebook on tutorials errors out testing locally on my ubuntu machine demonstrated a failure to launch jupyter russt puget drake tutorials bazel run rendering multibody plant starting local bazel server and connecting to it info invocation id info analyzed target tutorials rendering multibody plant packages loaded targets configured info found target info deleting stale sandbox base home russt cache bazel bazel russt sandbox target tutorials rendering multibody plant up to date bazel bin tutorials rendering multibody plant jupyter py main py bazel bin tutorials rendering multibody plant info elapsed time critical path info processes linux sandbox info build completed successfully total actions info build completed successfully total actions running notebook interactively serving notebooks from local directory home russt cache bazel bazel russt execroot drake bazel out opt bin tutorials rendering multibody plant runfiles drake tutorials the jupyter notebook is running at use control c to stop this server and shut down all kernels twice to skip confirmation traceback most recent call last file usr local bin jupyter notebook line in sys exit main file usr local lib dist packages jupyter core application py line in launch instance return super jupyterapp cls launch instance argv argv kwargs file usr local lib dist packages traitlets config application py line in launch instance app start file usr local lib dist packages notebook notebookapp py line in start self launch browser file usr local lib dist packages notebook notebookapp py line in launch browser with open fd w encoding utf as fh typeerror coercing to unicode need string or buffer int found the same error occurs when i try any of the other examples russt puget drake tutorials jupyter notebook version i ve also rerun install prereqs and confirmed that this does not fix anything | 1 |
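An aside on the Drake/Jupyter row above: the traceback bottoms out in `open(fd, 'w', encoding='utf-8')`, which fails under Python 2 because the builtin `open()` there accepted only a filename string and had no `encoding` keyword — hence `TypeError: coercing to Unicode: need string or buffer, int found` — while notebook's `launch_browser()` passes an integer file descriptor. A minimal sketch of the same pattern under Python 3 semantics (the helper name is illustrative, not from the notebook source):

```python
import os
import tempfile

def write_via_fd(text: str) -> str:
    """Write text through a raw file descriptor, the way notebook's
    launch_browser() does, and return what actually landed on disk."""
    fd, path = tempfile.mkstemp()
    # Python 3's builtin open() accepts an int fd and an encoding kwarg;
    # Python 2's builtin open() did not (io.open was the portable
    # spelling), which is what the traceback above is complaining about.
    with open(fd, "w", encoding="utf-8") as fh:  # also closes fd on exit
        fh.write(text)
    with open(path, "r", encoding="utf-8") as fh:
        round_tripped = fh.read()
    os.remove(path)
    return round_tripped
```

Running the notebook server under Python 3 (or spelling the call as `io.open`, which accepted an int fd on Python 2) avoids the error.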
207,848 | 7,134,205,348 | IssuesEvent | 2018-01-22 20:02:06 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Economy viewer contracts tooltips not displaying | Medium Priority | In economy viewer the contracts tooltips don't display at all. Sometimes they will work if the contract window is also opened, but it's very inconsistent. | 1.0 | Economy viewer contracts tooltips not displaying - In economy viewer the contracts tooltips don't display at all. Sometimes they will work if the contract window is also opened, but it's very inconsistent. | priority | economy viewer contracts tooltips not displaying in economy viewer the contracts tooltips don t display at all sometimes they will work if the contract window is also opened but it s very inconsistent | 1 |
461,891 | 13,237,893,040 | IssuesEvent | 2020-08-18 22:45:49 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Activity page Document file name overlapping on Firefox web browser | bug priority: medium | **Describe the bug**
Overlapping text when the document file name is too long, on the Firefox web browser only.
I have provided CSS code to the client as a fix, but this is an issue in our platform, so I am posting it here.
CSS code that I provided to the client to fix the issue:
.bb-activity-media-wrap .bb-activity-media-elem.document-activity .document-description-wrap .document-detail-wrap{
flex-basis:auto;
}
**To Reproduce**
Steps to reproduce the behavior:
1. Open Firefox Web Browser
2. Upload a document that has a long file name
**Screencast**
https://drive.google.com/file/d/1dEdZFUpzUFlMCxUsUCJggSsbicCHjcf_/view
**Expected behavior**
The text should not overlap
**Screenshots**

**Support ticket links**
https://secure.helpscout.net/conversation/1256395201/88666
| 1.0 | Activity page Document file name overlapping on Firefox web browser - **Describe the bug**
Overlapping text when the document file name is too long, on the Firefox web browser only.
I have provided CSS code to the client as a fix, but this is an issue in our platform, so I am posting it here.
CSS code that I provided to the client to fix the issue:
.bb-activity-media-wrap .bb-activity-media-elem.document-activity .document-description-wrap .document-detail-wrap{
flex-basis:auto;
}
**To Reproduce**
Steps to reproduce the behavior:
1. Open Firefox Web Browser
2. Upload a document that has a long file name
**Screencast**
https://drive.google.com/file/d/1dEdZFUpzUFlMCxUsUCJggSsbicCHjcf_/view
**Expected behavior**
The text should not overlap
**Screenshots**

**Support ticket links**
https://secure.helpscout.net/conversation/1256395201/88666
| priority | activity page document file name overlapping on firefox web browser describe the bug overlapping text when the document file name is too long on firefox web browse only i have provided a css code to client to fix but this is an issue in our platform so i considered posting this issue here css code that i provided to the client to fix the issue bb activity media wrap bb activity media elem document activity document description wrap document detail wrap flex basis auto to reproduce steps to reproduce the behavior open firefox web browser upload a document that has a long file name screencast expected behavior the text should not overlap screenshots support ticket links | 1 |
24,223 | 2,667,010,872 | IssuesEvent | 2015-03-22 04:45:54 | NewCreature/EOF | https://api.github.com/repos/NewCreature/EOF | closed | I deleted all lyrics, and EOF replaced them with strange text such as "4u^^g^" repeating in the lyric preview in the 3d panel. | bug imported Priority-Medium | _From [xander4j...@yahoo.com](https://code.google.com/u/111302640723734240985/) on May 10, 2010 01:19:40_
I deleted all lyrics, and EOF replaced them with strange text such as
"4u^^g^" repeating in the lyric preview in the 3d panel.
_Original issue: http://code.google.com/p/editor-on-fire/issues/detail?id=4_ | 1.0 | I deleted all lyrics, and EOF replaced them with strange text such as "4u^^g^" repeating in the lyric preview in the 3d panel. - _From [xander4j...@yahoo.com](https://code.google.com/u/111302640723734240985/) on May 10, 2010 01:19:40_
I deleted all lyrics, and EOF replaced them with strange text such as
"4u^^g^" repeating in the lyric preview in the 3d panel.
_Original issue: http://code.google.com/p/editor-on-fire/issues/detail?id=4_ | priority | i deleted all lyrics and eof replaced them with strange text such as g repeating in the lyric preview in the panel from on may i deleted all lyrics and eof replaced them with strange text such as g repeating in the lyric preview in the panel original issue | 1 |
796,370 | 28,108,488,622 | IssuesEvent | 2023-03-31 04:17:36 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Renovate not detecting docker image in Kubernetes file if there's a comment behind the image name | type:bug priority-3-medium manager:kubernetes status:ready reproduction:provided | ### How are you running Renovate?
Self-hosted Renovate
### If you're self-hosting Renovate, tell us what version of Renovate you run.
35.24.6
### If you're self-hosting Renovate, select which platform you are using.
Gitlab
### Was this something which used to work for you, and then stopped?
I don't know
### Describe the bug
If there's a trailing comment in a line containing a docker image, renovate won't recognize and update the image tag. For example:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- image: mattermost/mattermost-team-edition:7.8.1 # trailing comment
```
As soon as the comment is removed, the image will get picked up.
### Relevant debug logs
See comment below
### Have you created a minimal reproduction repository?
See comment below | 1.0 | Renovate not detecting docker image in Kubernetes file if there's a comment behind the image name - ### How are you running Renovate?
Self-hosted Renovate
### If you're self-hosting Renovate, tell us what version of Renovate you run.
35.24.6
### If you're self-hosting Renovate, select which platform you are using.
Gitlab
### Was this something which used to work for you, and then stopped?
I don't know
### Describe the bug
If there's a trailing comment in a line containing a docker image, renovate won't recognize and update the image tag. For example:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- image: mattermost/mattermost-team-edition:7.8.1 # trailing comment
```
As soon as the comment is removed, the image will get picked up.
### Relevant debug logs
See comment below
### Have you created a minimal reproduction repository?
See comment below | priority | renovate not detecting docker image in kubernetes file if there s a comment behind the image name how are you running renovate self hosted renovate if you re self hosting renovate tell us what version of renovate you run if you re self hosting renovate select which platform you are using gitlab was this something which used to work for you and then stopped i don t know describe the bug if there s a trailing comment in a line containing a docker image renovate won t recognize and update the image tag for example yaml apiversion apps kind deployment spec template spec containers image mattermost mattermost team edition trailing comment as soon as the comment is removed the image will get picked up relevant debug logs see comment below have you created a minimal reproduction repository see comment below | 1 |
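The Renovate row above describes a classic matcher pitfall: a pattern anchored to end-of-line stops matching as soon as a trailing `# comment` follows the image reference. A hypothetical sketch of the failure mode and the usual fix — this is not Renovate's actual parser; the names and patterns are illustrative only:

```python
import re

# Anchored to end-of-line: breaks as soon as a trailing comment appears.
NAIVE_IMAGE = re.compile(r"image:\s*(\S+)\s*$")

def extract_image(line: str):
    """Return the image reference from an 'image: ...' manifest line,
    tolerating an unquoted trailing '# comment'."""
    line = re.sub(r"\s+#.*$", "", line)  # drop the trailing comment
    m = re.search(r"image:\s*(\S+)", line)
    return m.group(1) if m else None
```

Stripping an unquoted trailing comment before matching is the common fix; a real YAML parser would also handle quoting and block scalars that this sketch ignores.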
650,776 | 21,416,940,008 | IssuesEvent | 2022-04-22 11:52:06 | sahar-avsh/SWE-599 | https://api.github.com/repos/sahar-avsh/SWE-599 | closed | Mindspace - Asking question functionality | enhancement Hard medium priority mindspace UI | There shall be a place to ask question about any **note** and **resource** or even a **mindspace** | 1.0 | Mindspace - Asking question functionality - There shall be a place to ask question about any **note** and **resource** or even a **mindspace** | priority | mindspace asking question functionality there shall be a place to ask question about any note and resource or even a mindspace | 1 |
239,039 | 7,785,999,021 | IssuesEvent | 2018-06-06 17:32:40 | DistrictDataLabs/yellowbrick | https://api.github.com/repos/DistrictDataLabs/yellowbrick | closed | CVScores | priority: medium review type: feature | Implement a visualizer that shows cross-validation scores as a bar chart along with the final score as an annotated horizontal line.
We want to start moving toward better cross-validation and model selection. Create a `yb.model_selection.CVScores` visualizer that extends `ModelVisualizer` and wraps an estimator. It accepts `cv` and `scoring` params, similar to the ones exposed in [`sklearn.model_selection.cross_val_score`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html).
Output is a bar chart with the score of each split and a horizontal dotted annotation with the average score. | 1.0 | CVScores - Implement a visualizer that shows cross-validation scores as a bar chart along with the final score as an annotated horizontal line.
We want to start moving toward better cross-validation and model selection. Create a `yb.model_selection.CVScores` visualizer that extends `ModelVisualizer` and wraps an estimator. It accepts `cv` and `scoring` params, similar to the ones exposed in [`sklearn.model_selection.cross_val_score`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html).
Output is a bar chart with the score of each split and a horizontal dotted annotation with the average score. | priority | cvscores implement a visualizer that shows cross validation scores as a bar chart along with the final score as an annotated horizontal line we want to start moving toward better cross validation and model selection create yb model selection cvscores visualizer that extends modelvisualizer and wraps an estimator accepts a cv and scoring params similar to the ones exposed in output is a bar chart with the score of each split and a horizontal dotted annotation with the average score | 1 |
819,038 | 30,717,524,727 | IssuesEvent | 2023-07-27 13:56:06 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Add Node task fails after Backup/Restores with EAR rotations | kind/bug area/docdb priority/medium 2.18 Backport Required | Jira Link: [DB-7062](https://yugabyte.atlassian.net/browse/DB-7062)
### Description
The Update Universe Task fails after the following operations on a EAR Enabled universe:
Create a 3 RF and 4 node universe
Run Sample Apps
Take Backup
Rotate KMS key
Restore in the same universe by renaming the keyspace
Take backup of the new keyspace
Disable EAR
Restore in the same universe by renaming the keyspace
Take Backup of the new keyspace
Enable EAR
Restore in the same universe by renaming the keyspace
Take Backup of the new keyspace
Restore in the same universe by renaming the keyspace (without any rotations)
Add node (Update Universe) task will fail with the following logs:
```
Failed to execute task {"platformVersion":"2.19.1.0-b189","sleepAfterMasterRestartMillis":180000,"sleepAfterTServerRestartMillis":180000,"nodeExporterUser":"prometheus","universeUUID":"7e756665-30e5-419e-9284-a3ec33a3cd82","enableYbc":false,"installYbc":false,"ybcInstalled":false,"encryptionAtRestConfig":{"encryptionAtRestEnabled":false,"opType":"UNDEFINED","type":"DATA_KEY"},"communicationPorts":{"masterHttpPort":7000,"masterRpcPort":7100,"tserverHttpPort":9000,"tserverRpcPort":9100,"ybControllerHttpPort":14000,"y..., hit error:
WaitForServer(7e756665-30e5-419e-9284-a3ec33a3cd82, yb-dev-ui-auto-aws-21910-b189-kms-114-n5, type=TSERVER) did not respond in the set time..
```
```
W0627 18:17:42.236019 65736 tablet_server.cc:417] Getting full universe key registry from master Leader failed: 'Not found (yb/master/encryption_manager.cc:246): Could not find key with version '. Attempts: 2294, Total Time: 6859638200092ms. Retrying...
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-7062]: https://yugabyte.atlassian.net/browse/DB-7062?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [DocDB] Add Node task fails after Backup/Restores with EAR rotations - Jira Link: [DB-7062](https://yugabyte.atlassian.net/browse/DB-7062)
### Description
The Update Universe Task fails after the following operations on a EAR Enabled universe:
Create a 3 RF and 4 node universe
Run Sample Apps
Take Backup
Rotate KMS key
Restore in the same universe by renaming the keyspace
Take backup of the new keyspace
Disable EAR
Restore in the same universe by renaming the keyspace
Take Backup of the new keyspace
Enable EAR
Restore in the same universe by renaming the keyspace
Take Backup of the new keyspace
Restore in the same universe by renaming the keyspace (without any rotations)
Add node (Update Universe) task will fail with the following logs:
```
Failed to execute task {"platformVersion":"2.19.1.0-b189","sleepAfterMasterRestartMillis":180000,"sleepAfterTServerRestartMillis":180000,"nodeExporterUser":"prometheus","universeUUID":"7e756665-30e5-419e-9284-a3ec33a3cd82","enableYbc":false,"installYbc":false,"ybcInstalled":false,"encryptionAtRestConfig":{"encryptionAtRestEnabled":false,"opType":"UNDEFINED","type":"DATA_KEY"},"communicationPorts":{"masterHttpPort":7000,"masterRpcPort":7100,"tserverHttpPort":9000,"tserverRpcPort":9100,"ybControllerHttpPort":14000,"y..., hit error:
WaitForServer(7e756665-30e5-419e-9284-a3ec33a3cd82, yb-dev-ui-auto-aws-21910-b189-kms-114-n5, type=TSERVER) did not respond in the set time..
```
```
W0627 18:17:42.236019 65736 tablet_server.cc:417] Getting full universe key registry from master Leader failed: 'Not found (yb/master/encryption_manager.cc:246): Could not find key with version '. Attempts: 2294, Total Time: 6859638200092ms. Retrying...
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-7062]: https://yugabyte.atlassian.net/browse/DB-7062?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | add node task fails after backup restores with ear rotations jira link description the update universe task fails after the following operations on a ear enabled universe create a rf and node universe run sample apps take backup rotate kms key restore in the same universe by renaming the keyspace take backup of the new keyspace disable ear restore in the same universe by renaming the keyspace take backup of the new keyspace enable ear restore in the same universe by renaming the keyspace take backup of the new keyspace restore in the same universe by renaming the keyspace without any rotations add node update universe task will fail with the following logs failed to execute task platformversion sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport y hit error waitforserver yb dev ui auto aws kms type tserver did not respond in the set time tablet server cc getting full universe key registry from master leader failed not found yb master encryption manager cc could not find key with version attempts total time retrying warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
599,824 | 18,283,945,848 | IssuesEvent | 2021-10-05 08:12:22 | lea927/drop-that-beat | https://api.github.com/repos/lea927/drop-that-beat | closed | As a Player, I want to know if I got the correct answer. | Priority: Medium Type: Feature State: Ongoing Review | ## User Story
As a Player, I want to know if I got the correct answer.
## Acceptance criteria
- [x] Given that I'm playing the game, when I click a choice, the system generates a message that the answer is correct.
- [x] Given that I'm playing the game, when I click the correct choice, the button changes color from blue to green.
- [x] Given that I'm playing the game, when I click an incorrect choice, the button changes color from blue to red.
| 1.0 | As a Player, I want to know if I got the correct answer. - ## User Story
As a Player, I want to know if I got the correct answer.
## Acceptance criteria
- [x] Given that I'm playing the game, when I click a choice, the system generates a message that the answer is correct.
- [x] Given that I'm playing the game, when I click the correct choice, the button changes color from blue to green.
- [x] Given that I'm playing the game, when I click an incorrect choice, the button changes color from blue to red.
| priority | as a player i want to know if i got the correct answer user story as a player i want to know if i got the correct answer acceptance criteria given that i m playing the game when i click a choice the system generates a message that the answer is correct given that i m playing the game when i click the correct choice the button changes color from blue to green given that i m playing the game when i click an incorrect choice the button changes color from blue to red | 1 |
212,867 | 7,243,488,546 | IssuesEvent | 2018-02-14 11:55:26 | bounswe/bounswe2018group2 | https://api.github.com/repos/bounswe/bounswe2018group2 | closed | Create a wiki page that describes yourself and update your links on both README.md and sidebar | good first issue medium priority | After you have created a new page and filled that page with your personal information, you can update your links on README.md by clicking the file and clicking edit (something like this: ✏️ 😄 ) and you can update the sidebar links on the wiki by clicking the same icon on the sidebar. | 1.0 | Create a wiki page that describes yourself and update your links on both README.md and sidebar - After you have created a new page and filled that page with your personal information, you can update your links on README.md by clicking the file and clicking edit (something like this: ✏️ 😄 ) and you can update the sidebar links on the wiki by clicking the same icon on the sidebar. | priority | create a wiki page that describes yourself and update your links on both readme md and sidebar after you have created a new page and filled that page with your personal information you can update your links on readme md by clicking the file and clicking edit something like this ✏️ 😄 and you can update the sidebar links on the wiki by clicking the same icon on the sidebar | 1 |
222,904 | 7,440,707,123 | IssuesEvent | 2018-03-27 11:00:35 | AnSyn/ansyn | https://api.github.com/repos/AnSyn/ansyn | opened | Filters menu - resolution slider | Bug Priority: Medium | 1. When the menu has a scrollbar, the slider is partially visible

| 1.0 | Filters menu - resolution slider - 1. when menu has scroll the slider is partially visible

| priority | filters menu resolution slider when menu has scroll the slider is partially visible | 1 |
462,580 | 13,249,519,033 | IssuesEvent | 2020-08-19 20:57:31 | phetsims/scenery-phet | https://api.github.com/repos/phetsims/scenery-phet | closed | Change StepButton constructor? | priority:3-medium | Related to https://github.com/phetsims/wave-interference/issues/342.
Compare `PlayButton` and `StepButton` constructor APIs:
```js
function PlayPauseButton( isPlayingProperty, options )
function StepButton( options ) {
options = _.extend( {
...
// {Property.<boolean>|null} is the sim playing? This is a convenience option.
// If this Property is provided, it will disable the button while the sim is playing,
// and you should avoid using the button's native 'enabled' property.
isPlayingProperty: null,
...
}, options );
}
```
Why are they different? Should `isPlayingProperty` be a required parameter for `StepButton`?
The only clients now are wave-interference and gas-properties, so this would be a good time to change.
@samreid your opinion? | 1.0 | Change StepButton constructor? - Related to https://github.com/phetsims/wave-interference/issues/342.
Compare `PlayPauseButton` and `StepButton` constructor APIs:
```js
function PlayPauseButton( isPlayingProperty, options )
function StepButton( options ) {
options = _.extend( {
...
// {Property.<boolean>|null} is the sim playing? This is a convenience option.
// If this Property is provided, it will disable the button while the sim is playing,
// and you should avoid using the button's native 'enabled' property.
isPlayingProperty: null,
...
}, options );
}
```
Why are they different? Should `isPlayingProperty` be a required parameter for `StepButton`?
The only clients now are wave-interference and gas-properties, so this would be a good time to change.
@samreid your opinion? | priority | change stepbutton constructor related to compare playbutton and stepbutton constructor apis js function playpausebutton isplayingproperty options function stepbutton options options extend property null is the sim playing this is a convenience option if this property is provided it will disable the button while the sim is playing and you should avoid using the button s native enabled property isplayingproperty null options why are they different should isplayingproperty be a required parameter for stepbutton the only clients now are wave interference and gas properties so this would be a good time to change samreid your opinion | 1 |
799,563 | 28,309,562,201 | IssuesEvent | 2023-04-10 14:15:28 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Editing any item in the TreeList InCell will add the k-dirty indicator to its top-left data cell | Bug SEV: Medium C: Gantt C: TreeList jQuery Priority 5 | ### Bug report
Editing any item in the TreeList InCell will add the k-dirty indicator to its top-left data cell
**Regression introduced with R1 2023**
### Reproduction of the problem
1. Open the TreeList InCell editing demo - https://demos.telerik.com/kendo-ui/treelist/editing-incell
2. Edit any cell in the TreeList
3. The `k-dirty` indicator will appear on its top-left data cell
### Current behavior
The top-left data cell is marked as dirty
### Expected/desired behavior
The top-left data cell shouldn't be marked as dirty
### Environment
* **Kendo UI version:** 2023.1.314
* **Browser:** [all] | 1.0 | Editing any item in the TreeList InCell will add the k-dirty indicator to its top-left data cell - ### Bug report
Editing any item in the TreeList InCell will add the k-dirty indicator to its top-left data cell
**Regression introduced with R1 2023**
### Reproduction of the problem
1. Open the TreeList InCell editing demo - https://demos.telerik.com/kendo-ui/treelist/editing-incell
2. Edit any cell in the TreeList
3. The `k-dirty` indicator will appear on its top-left data cell
### Current behavior
The top-left data cell is marked as dirty
### Expected/desired behavior
The top-left data cell shouldn't be marked as dirty
### Environment
* **Kendo UI version:** 2023.1.314
* **Browser:** [all] | priority | editing any item in the treelist incell will add the k dirty indicator to its top left data cell bug report editing any item in the treelist incell will add the k dirty indicator to its top left data cell regression introduced with reproduction of the problem open the treelist incell editing demo edit any cell in the treelist the k dirty indicator will appear on its top left data cell current behavior the top left data cell is marked as dirty expected desired behavior the top left data cell shouldn t be marked as dirty environment kendo ui version browser | 1 |
734,483 | 25,351,053,721 | IssuesEvent | 2022-11-19 19:31:42 | bounswe/bounswe2022group4 | https://api.github.com/repos/bounswe/bounswe2022group4 | opened | Backend: Modification of the authentication mechanism. | Category - To Do Priority - Medium Language - Python Team - Backend | **Description**:
The backend team decided to migrate from JWT Authentication to the token authentication mechanism provided by the Django REST framework.
**Tasks:**
- Implement the token authentication.
**Deadline: 22/11/2022, 22.00 (GMT+3)** | 1.0 | Backend: Modification of the authentication mechanism. - **Description**:
The backend team decided to migrate from JWT Authentication to the token authentication mechanism provided by the Django REST framework.
**Tasks:**
- Implement the token authentication.
**Deadline: 22/11/2022, 22.00 (GMT+3)** | priority | backend modification of the authentication mechanism description the backend team decided to migrate from jwt authentication to the token authentication mechanism provided by the django rest framework tasks implement the token authentication deadline gmt | 1 |
830,056 | 31,986,917,372 | IssuesEvent | 2023-09-21 00:37:33 | LBL-EESA/TECA | https://api.github.com/repos/LBL-EESA/TECA | opened | add temporal index select stage to the cf_restripe | feature 2_medium_priority | add the teca_temporal_index_select stage, which takes a list of indices that will be presented to the system to be processed.
currently the index_select gets the list of indices externally. there's some ad hoc code in the temporal_reduction app to load the list of indices, the table reader could potentially be used (the list of indices is a table with 1 column), this i/o code could be packaged into the class itself, or a utility that is shared by the two apps. | 1.0 | add temporal index select stage to the cf_restripe - add the teca_temporal_index_select stage, which takes a list of indices that will be presented to the system to be processed.
currently the index_select gets the list of indices externally. there's some ad hoc code in the temporal_reduction app to load the list of indices, the table reader could potentially be used (the list of indices is a table with 1 column), this i/o code could be packaged into the class itself, or a utility that is shared by the two apps. | priority | add temporal index select stage to the cf restripe add the teca temporal index select stage which takes a list of indices that will be presented to the system to be processed currently the index select gets the list of indices externally there s some ad hoc code in the temporal reduction app to load the list of indices the table reader could potentially be used the list of indices is a table with column this i o code could be packaged into the class itself or a utility that is shared by the two apps | 1 |
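The i/o described in the record above (a one-column table of indices driving a temporal selection) is simple enough to prototype outside the pipeline. The following Python sketch uses hypothetical helper names and a plain CSV stand-in for the table; TECA's own table reader and `teca_temporal_index_select` stage are C++ pipeline components:

```python
import csv
import io

def read_index_column(text):
    """Parse a one-column table (CSV with a header row) into a list of ints."""
    reader = csv.reader(io.StringIO(text))
    next(reader)  # skip the header row
    return [int(row[0]) for row in reader if row]

def select_by_index(steps, indices):
    """Present only the requested time steps, in the order they are listed."""
    return [steps[i] for i in indices]
```

Packaging the loading helper into the stage itself (or a shared utility) would remove the ad hoc code from the apps, as the issue suggests.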
538,637 | 15,774,378,669 | IssuesEvent | 2021-04-01 00:57:34 | sonia-auv/octopus-telemetry | https://api.github.com/repos/sonia-auv/octopus-telemetry | closed | Fix Dockerfile | Priority: Medium Type: Bug | ## Expected Behavior
We expect to be able to build the project.
## Current Behavior
The `docker build...` step fails because of a missing `package-lock.json`.
## Possible Solution
We can remove the `package-lock.json` - not necessary for the Docker instance.
## Comments
We could test on as most Docker versions possible.
## Environment Used
- docker version : `Docker version 20.10.5, build 55c4c88` | 1.0 | Fix Dockerfile - ## Expected Behavior
We expect to be able to build the project.
## Current Behavior
The `docker build...` step fails because of a missing `package-lock.json`.
## Possible Solution
We can remove the `package-lock.json` - not necessary for the Docker instance.
## Comments
We could test on as most Docker versions possible.
## Environment Used
- docker version : `Docker version 20.10.5, build 55c4c88` | priority | fix dockerfile expected behavior we expect to be able to build the project current behavior the docker build step fails because of a missing package lock json possible solution we can remove the package lock json not necessary for the docker instance comments we could test on as most docker versions possible environment used docker version docker version build | 1 |
78,191 | 3,509,507,843 | IssuesEvent | 2016-01-08 23:08:29 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Quest: [Portals of the Legion] (BB #960) | Category: Quests migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** GazelleMag
**Original Date:** 03.06.2015 13:06:07 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/960
<hr>
When you banish the portals they aren't being counted as "banished". The quest objectives stays at 0/6. | 1.0 | Quest: [Portals of the Legion] (BB #960) - This issue was migrated from bitbucket.
**Original Reporter:** GazelleMag
**Original Date:** 03.06.2015 13:06:07 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/960
<hr>
When you banish the portals they aren't being counted as "banished". The quest objectives stays at 0/6. | priority | quest bb this issue was migrated from bitbucket original reporter gazellemag original date gmt original priority major original type bug original state resolved direct link when you banish the portals they aren t being counted as banished the quest objectives stays at | 1 |
454,170 | 13,095,914,813 | IssuesEvent | 2020-08-03 14:51:56 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | ycb objects missing textures in drake visualizer | priority: medium team: robot locomotion group type: bug | Running ` bazel run //examples/manipulation_station:end_effector_teleop_sliders -- --setup=clutter_clearing` currently results in textureless YCB objects.

They used to have textures (and continue to have texture in meshcat). I hope it's not just something stale locally. @SeanCurtis-TRI -- perhaps you can reproduce?
| 1.0 | ycb objects missing textures in drake visualizer - Running ` bazel run //examples/manipulation_station:end_effector_teleop_sliders -- --setup=clutter_clearing` currently results in textureless YCB objects.

They used to have textures (and continue to have texture in meshcat). I hope it's not just something stale locally. @SeanCurtis-TRI -- perhaps you can reproduce?
| priority | ycb objects missing textures in drake visualizer running bazel run examples manipulation station end effector teleop sliders setup clutter clearing currently results in textureless ycb objects they used to have textures and continue to have texture in meshcat i hope it s not just something stale locally seancurtis tri perhaps you can reproduce | 1 |
783,167 | 27,520,948,311 | IssuesEvent | 2023-03-06 15:00:38 | telabotanica/pollinisateurs | https://api.github.com/repos/telabotanica/pollinisateurs | closed | Change the font on a group's discussion page | priority::medium | The font here https://staging.nospollinisateurs.fr/groups/communaute/discussions, used for the light-grey strings "actif il y a x", "2 messages", "véro schäfer y'a 2 jours" (raised in #68 to discuss its colour),
should be Helvetica, not Century Gothic
 | 1.0 | Change the font on a group's discussion page - The font here https://staging.nospollinisateurs.fr/groups/communaute/discussions, used for the light-grey strings "actif il y a x", "2 messages", "véro schäfer y'a 2 jours" (raised in #68 to discuss its colour),
should be Helvetica, not Century Gothic
 | priority | change the font on a group s discussion page the font here used for the light grey strings actif il y a x messages véro schäfer y a jours raised in to discuss its colour should be helvetica not century gothic | 1 |
57,772 | 3,083,773,966 | IssuesEvent | 2015-08-24 11:13:10 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | The file download sound should not play when downloading a file list or requesting an IP. | bug Component-UI imported Priority-Medium | _From [tret2...@gmail.com](https://code.google.com/u/116508191076211387118/) on July 17, 2013 06:43:44_
The file download sound should not play when downloading a file list, which is easy to observe when requesting the IP of all users [on all hubs]...
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1088_ | 1.0 | The file download sound should not play when downloading a file list or requesting an IP. - _From [tret2...@gmail.com](https://code.google.com/u/116508191076211387118/) on July 17, 2013 06:43:44_
The file download sound should not play when downloading a file list, which is easy to observe when requesting the IP of all users [on all hubs]...
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1088_ | priority | the file download sound should not play when downloading a file list or requesting an ip from on july the file download sound should not play when downloading a file list which is easy to observe when requesting the ip of all users original issue | 1 |
427,449 | 12,395,429,582 | IssuesEvent | 2020-05-20 18:38:52 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | Add ability for VLJs to edit attorney comments/feedback | Priority: Medium Product: caseflow-queue Team: Echo 🐬 Type: New Development | <!-- The goal of this template is to be a tool to communicate the requirements for a story related task. It is not intended as a mandate, adapt as needed. -->
## User or job story
User story: As a VLJ, I need the ability to view and edit attorney feedback/comments after judge checkout and/or dispatch, so that reporting can be as accurate as possible.
## Acceptance criteria
- [ ] VLJs are able to view the attorney feedback or comments after they have checked out their decision
- [ ] VLJs are able to edit attorney feedback or comments after they have checked out their decision
- [ ] VLJs are able to edit attorney feedback or comments after the case has been dispatched
- [ ] Include screenshot(s) in the Github issue if there are front-end changes
## Release notes
VLJs are now able to view and edit attorney feedback after the decision review task has been completed.
<!-- The following sections can be deleted if they are not needed -->
### Designs
<!-- Include screenshots or links to designs if applicable. -->
### Background/context
Per the Board, this is functionality that existed in VACOLS/DAS and as reporting becomes more disseminated to attorneys it would be good functionality to include into Caseflow as well.
### Technical notes
<!-- Include notes that might help an engineer get started on this more quickly, or potential pitfalls to watch out for. -->
| 1.0 | Add ability for VLJs to edit attorney comments/feedback - <!-- The goal of this template is to be a tool to communicate the requirements for a story related task. It is not intended as a mandate, adapt as needed. -->
## User or job story
User story: As a VLJ, I need the ability to view and edit attorney feedback/comments after judge checkout and/or dispatch, so that reporting can be as accurate as possible.
## Acceptance criteria
- [ ] VLJs are able to view the attorney feedback or comments after they have checked out their decision
- [ ] VLJs are able to edit attorney feedback or comments after they have checked out their decision
- [ ] VLJs are able to edit attorney feedback or comments after the case has been dispatched
- [ ] Include screenshot(s) in the Github issue if there are front-end changes
## Release notes
VLJs are now able to view and edit attorney feedback after the decision review task has been completed.
<!-- The following sections can be deleted if they are not needed -->
### Designs
<!-- Include screenshots or links to designs if applicable. -->
### Background/context
Per the Board, this is functionality that existed in VACOLS/DAS and as reporting becomes more disseminated to attorneys it would be good functionality to include into Caseflow as well.
### Technical notes
<!-- Include notes that might help an engineer get started on this more quickly, or potential pitfalls to watch out for. -->
| priority | add ability for vljs to edit attorney comments feedback user or job story user story as a vlj i need the ability to view and edit attorney feedback comments after judge checkout and or dispatch so that reporting can be as accurate as possible acceptance criteria vljs are able to view the attorney feedback or comments after they have checked out their decision vljs are able to edit attorney feedback or comments after they have checked out their decision vljs are able to edit attorney feedback or comments after the case has been dispatched include screenshot s in the github issue if there are front end changes release notes vljs are now able to view and edit attorney feedback after the decision review task has been completed designs background context per the board this is functionality that existed in vacols das and as reporting becomes more disseminated to attorneys it would be good functionality to include into caseflow as well technical notes | 1 |
637,340 | 20,625,762,167 | IssuesEvent | 2022-03-07 22:20:28 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | lvgl: upgrade LVGL to 8.1 build error | bug priority: medium | **Describe the bug**
`[60/413] Building C object modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj
FAILED: modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj
C:\gnu_arm_embedded\bin\arm-none-eabi-gcc.exe -DKERNEL -DLV_CONF_INCLUDE_SIMPLE=1 -DLV_CONF_PATH=zephyr/lv_conf.h -DNRF52832_XXAA -D_FORTIFY_SOURCE=2 -D__LINUX_ERRNO_EXTENSIONS__ -D__PROGRAM_START -D__ZEPHYR__=1 -ID:/personal/pinetime/zephyr/lib/gui/lvgl -ID:/personal/pinetime/zephyr/include -Izephyr/include/generated -ID:/personal/pinetime/zephyr/soc/arm/nordic_nrf/nrf52 -ID:/personal/pinetime/zephyr/lib/libc/newlib/include -ID:/personal/pinetime/zephyr/soc/arm/nordic_nrf/common/. -ID:/personal/pinetime/zephyr/subsys/bluetooth -ID:/personal/pinetime/zephyr/subsys/settings/include -ID:/personal/pinetime/modules/hal/cmsis/CMSIS/Core/Include -ID:/personal/pinetime/modules/hal/nordic/nrfx -ID:/personal/pinetime/modules/hal/nordic/nrfx/drivers/include -ID:/personal/pinetime/modules/hal/nordic/nrfx/mdk -ID:/personal/pinetime/zephyr/modules/hal_nordic/nrfx/. -ID:/personal/pinetime/modules/lib/gui/lvgl/zephyr/.. -ID:/personal/pinetime/modules/debug/segger/SEGGER -ID:/personal/pinetime/modules/debug/segger/Config -ID:/personal/pinetime/zephyr/modules/segger/. -ID:/personal/pinetime/modules/crypto/tinycrypt/lib/include -ID:/personal/pinetime/pinetime/app/src -ID:/personal/pinetime/pinetime/app/src/bluetooth/service -ID:/personal/pinetime/pinetime/app/src/apps -ID:/personal/pinetime/pinetime/app/src/event/. 
-Os -imacros D:/personal/pinetime/pinetime/build/zephyr/include/generated/autoconf.h -ffreestanding -fno-common -g -gdwarf-4 -fdiagnostics-color=always -mcpu=cortex-m4 -mthumb -mabi=aapcs -mfp16-format=ieee -imacros D:/personal/pinetime/zephyr/include/toolchain/zephyr_stdint.h -Wall -Wformat -Wformat-security -Wno-format-zero-length -Wno-main -Wno-pointer-sign -Wpointer-arith -Wexpansion-to-defined -Wno-unused-but-set-variable -Werror=implicit-int -fno-asynchronous-unwind-tables -fno-pie -fno-pic -fno-reorder-functions -fno-defer-pop -fmacro-prefix-map=D:/personal/pinetime/pinetime/app=CMAKE_SOURCE_DIR -fmacro-prefix-map=D:/personal/pinetime/zephyr=ZEPHYR_BASE -fmacro-prefix-map=D:/personal/pinetime=WEST_TOPDIR -ffunction-sections -fdata-sections -specs=nano.specs -std=c99 -MD -MT modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj -MF modules\lvgl\CMakeFiles\..__modules__lib__gui__lvgl__zephyr.dir\D_\personal\pinetime\modules\lib\gui\lvgl\src\extra\widgets\win\lv_win.c.obj.d -o modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj -c D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c
In file included from d:\personal\pinetime\modules\lib\gui\lvgl\lvgl.h:54,
from d:\personal\pinetime\modules\lib\gui\lvgl\src\lvgl.h:17,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.h:16,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c:9:
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:22:2: error: #error "lv_slider: lv_bar is required. Enable it in lv_conf.h (LV_USE_BAR 1)"
22 | #error "lv_slider: lv_bar is required. Enable it in lv_conf.h (LV_USE_BAR 1)"
| ^~~~~
In file included from d:\personal\pinetime\modules\lib\gui\lvgl\lvgl.h:54,
from d:\personal\pinetime\modules\lib\gui\lvgl\src\lvgl.h:17,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.h:16,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c:9:
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:37:29: error: 'LV_BAR_MODE_NORMAL' undeclared here (not in a function); did you mean 'LV_BLEND_MODE_NORMAL'?
37 | LV_SLIDER_MODE_NORMAL = LV_BAR_MODE_NORMAL,
| ^~~~~~~~~~~~~~~~~~
| LV_BLEND_MODE_NORMAL
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:38:34: error: 'LV_BAR_MODE_SYMMETRICAL' undeclared here (not in a function)
38 | LV_SLIDER_MODE_SYMMETRICAL = LV_BAR_MODE_SYMMETRICAL,
| ^~~~~~~~~~~~~~~~~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:39:28: error: 'LV_BAR_MODE_RANGE' undeclared here (not in a function)
39 | LV_SLIDER_MODE_RANGE = LV_BAR_MODE_RANGE
| ^~~~~~~~~~~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:44:5: error: unknown type name 'lv_bar_t'
44 | lv_bar_t bar; /*Add the ancestor's type first*/
| ^~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_value':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:86:5: warning: implicit declaration of function 'lv_bar_set_value'; did you mean 'lv_slider_set_value'? [-Wimplicit-function-declaration]
86 | lv_bar_set_value(obj, value, anim);
| ^~~~~~~~~~~~~~~~
| lv_slider_set_value
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_left_value':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:97:5: warning: implicit declaration of function 'lv_bar_set_start_value' [-Wimplicit-function-declaration]
97 | lv_bar_set_start_value(obj, value, anim);
| ^~~~~~~~~~~~~~~~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_range':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:108:5: warning: implicit declaration of function 'lv_bar_set_range'; did you mean 'lv_slider_set_range'? [-Wimplicit-function-declaration]
108 | lv_bar_set_range(obj, min, max);
| ^~~~~~~~~~~~~~~~
| lv_slider_set_range
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_mode':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:5: warning: implicit declaration of function 'lv_bar_set_mode'; did you mean 'lv_slider_set_mode'? [-Wimplicit-function-declaration]
118 | lv_bar_set_mode(obj, (lv_bar_mode_t)mode);
| ^~~~~~~~~~~~~~~
| lv_slider_set_mode
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:27: error: 'lv_bar_mode_t' undeclared (first use in this function); did you mean 'lv_fs_mode_t'?
118 | lv_bar_set_mode(obj, (lv_bar_mode_t)mode);
| ^~~~~~~~~~~~~
| lv_fs_mode_t
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:27: note: each undeclared identifier is reported only once for each function it appears in
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:41: error: expected ')' before 'mode'
118 | lv_bar_set_mode(obj, (lv_bar_mode_t)mode);
| ^~~~
| )
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_get_value':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:132:12: warning: implicit declaration of function 'lv_bar_get_value'; did you mean 'lv_slider_get_value'? [-Wimplicit-function-declara
`
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
pinetime_devkit0
- What have you tried to diagnose or workaround this issue?
find out the Kconfig maybe not correct
**To Reproduce**
Steps to reproduce the behavior:
1. enable CONFIG_LVGL=y
CONFIG_LV_CONF_MINIMAL=y
build and see error
**Expected behavior**
**Impact**
**Logs and console output**
**Environment (please complete the following information):**
**Additional context**
| 1.0 | lvgl: upgrade LVGL to 8.1 build error - **Describe the bug**
`[60/413] Building C object modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj
FAILED: modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj
C:\gnu_arm_embedded\bin\arm-none-eabi-gcc.exe -DKERNEL -DLV_CONF_INCLUDE_SIMPLE=1 -DLV_CONF_PATH=zephyr/lv_conf.h -DNRF52832_XXAA -D_FORTIFY_SOURCE=2 -D__LINUX_ERRNO_EXTENSIONS__ -D__PROGRAM_START -D__ZEPHYR__=1 -ID:/personal/pinetime/zephyr/lib/gui/lvgl -ID:/personal/pinetime/zephyr/include -Izephyr/include/generated -ID:/personal/pinetime/zephyr/soc/arm/nordic_nrf/nrf52 -ID:/personal/pinetime/zephyr/lib/libc/newlib/include -ID:/personal/pinetime/zephyr/soc/arm/nordic_nrf/common/. -ID:/personal/pinetime/zephyr/subsys/bluetooth -ID:/personal/pinetime/zephyr/subsys/settings/include -ID:/personal/pinetime/modules/hal/cmsis/CMSIS/Core/Include -ID:/personal/pinetime/modules/hal/nordic/nrfx -ID:/personal/pinetime/modules/hal/nordic/nrfx/drivers/include -ID:/personal/pinetime/modules/hal/nordic/nrfx/mdk -ID:/personal/pinetime/zephyr/modules/hal_nordic/nrfx/. -ID:/personal/pinetime/modules/lib/gui/lvgl/zephyr/.. -ID:/personal/pinetime/modules/debug/segger/SEGGER -ID:/personal/pinetime/modules/debug/segger/Config -ID:/personal/pinetime/zephyr/modules/segger/. -ID:/personal/pinetime/modules/crypto/tinycrypt/lib/include -ID:/personal/pinetime/pinetime/app/src -ID:/personal/pinetime/pinetime/app/src/bluetooth/service -ID:/personal/pinetime/pinetime/app/src/apps -ID:/personal/pinetime/pinetime/app/src/event/. 
-Os -imacros D:/personal/pinetime/pinetime/build/zephyr/include/generated/autoconf.h -ffreestanding -fno-common -g -gdwarf-4 -fdiagnostics-color=always -mcpu=cortex-m4 -mthumb -mabi=aapcs -mfp16-format=ieee -imacros D:/personal/pinetime/zephyr/include/toolchain/zephyr_stdint.h -Wall -Wformat -Wformat-security -Wno-format-zero-length -Wno-main -Wno-pointer-sign -Wpointer-arith -Wexpansion-to-defined -Wno-unused-but-set-variable -Werror=implicit-int -fno-asynchronous-unwind-tables -fno-pie -fno-pic -fno-reorder-functions -fno-defer-pop -fmacro-prefix-map=D:/personal/pinetime/pinetime/app=CMAKE_SOURCE_DIR -fmacro-prefix-map=D:/personal/pinetime/zephyr=ZEPHYR_BASE -fmacro-prefix-map=D:/personal/pinetime=WEST_TOPDIR -ffunction-sections -fdata-sections -specs=nano.specs -std=c99 -MD -MT modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj -MF modules\lvgl\CMakeFiles\..__modules__lib__gui__lvgl__zephyr.dir\D_\personal\pinetime\modules\lib\gui\lvgl\src\extra\widgets\win\lv_win.c.obj.d -o modules/lvgl/CMakeFiles/..__modules__lib__gui__lvgl__zephyr.dir/D_/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c.obj -c D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c
In file included from d:\personal\pinetime\modules\lib\gui\lvgl\lvgl.h:54,
from d:\personal\pinetime\modules\lib\gui\lvgl\src\lvgl.h:17,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.h:16,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c:9:
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:22:2: error: #error "lv_slider: lv_bar is required. Enable it in lv_conf.h (LV_USE_BAR 1)"
22 | #error "lv_slider: lv_bar is required. Enable it in lv_conf.h (LV_USE_BAR 1)"
| ^~~~~
In file included from d:\personal\pinetime\modules\lib\gui\lvgl\lvgl.h:54,
from d:\personal\pinetime\modules\lib\gui\lvgl\src\lvgl.h:17,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.h:16,
from D:/personal/pinetime/modules/lib/gui/lvgl/src/extra/widgets/win/lv_win.c:9:
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:37:29: error: 'LV_BAR_MODE_NORMAL' undeclared here (not in a function); did you mean 'LV_BLEND_MODE_NORMAL'?
37 | LV_SLIDER_MODE_NORMAL = LV_BAR_MODE_NORMAL,
| ^~~~~~~~~~~~~~~~~~
| LV_BLEND_MODE_NORMAL
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:38:34: error: 'LV_BAR_MODE_SYMMETRICAL' undeclared here (not in a function)
38 | LV_SLIDER_MODE_SYMMETRICAL = LV_BAR_MODE_SYMMETRICAL,
| ^~~~~~~~~~~~~~~~~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:39:28: error: 'LV_BAR_MODE_RANGE' undeclared here (not in a function)
39 | LV_SLIDER_MODE_RANGE = LV_BAR_MODE_RANGE
| ^~~~~~~~~~~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:44:5: error: unknown type name 'lv_bar_t'
44 | lv_bar_t bar; /*Add the ancestor's type first*/
| ^~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_value':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:86:5: warning: implicit declaration of function 'lv_bar_set_value'; did you mean 'lv_slider_set_value'? [-Wimplicit-function-declaration]
86 | lv_bar_set_value(obj, value, anim);
| ^~~~~~~~~~~~~~~~
| lv_slider_set_value
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_left_value':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:97:5: warning: implicit declaration of function 'lv_bar_set_start_value' [-Wimplicit-function-declaration]
97 | lv_bar_set_start_value(obj, value, anim);
| ^~~~~~~~~~~~~~~~~~~~~~
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_range':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:108:5: warning: implicit declaration of function 'lv_bar_set_range'; did you mean 'lv_slider_set_range'? [-Wimplicit-function-declaration]
108 | lv_bar_set_range(obj, min, max);
| ^~~~~~~~~~~~~~~~
| lv_slider_set_range
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_set_mode':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:5: warning: implicit declaration of function 'lv_bar_set_mode'; did you mean 'lv_slider_set_mode'? [-Wimplicit-function-declaration]
118 | lv_bar_set_mode(obj, (lv_bar_mode_t)mode);
| ^~~~~~~~~~~~~~~
| lv_slider_set_mode
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:27: error: 'lv_bar_mode_t' undeclared (first use in this function); did you mean 'lv_fs_mode_t'?
118 | lv_bar_set_mode(obj, (lv_bar_mode_t)mode);
| ^~~~~~~~~~~~~
| lv_fs_mode_t
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:27: note: each undeclared identifier is reported only once for each function it appears in
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:118:41: error: expected ')' before 'mode'
118 | lv_bar_set_mode(obj, (lv_bar_mode_t)mode);
| ^~~~
| )
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h: In function 'lv_slider_get_value':
d:\personal\pinetime\modules\lib\gui\lvgl\src/widgets/lv_slider.h:132:12: warning: implicit declaration of function 'lv_bar_get_value'; did you mean 'lv_slider_get_value'? [-Wimplicit-function-declara
`
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
pinetime_devkit0
- What have you tried to diagnose or workaround this issue?
find out the Kconfig maybe not correct
**To Reproduce**
Steps to reproduce the behavior:
1. enable CONFIG_LVGL=y
CONFIG_LV_CONF_MINIMAL=y
build and see error
**Expected behavior**
**Impact**
**Logs and console output**
**Environment (please complete the following information):**
**Additional context**
| priority | lvgl upgrade lvgl to build error describe the bug building c object modules lvgl cmakefiles modules lib gui lvgl zephyr dir d personal pinetime modules lib gui lvgl src extra widgets win lv win c obj failed modules lvgl cmakefiles modules lib gui lvgl zephyr dir d personal pinetime modules lib gui lvgl src extra widgets win lv win c obj c gnu arm embedded bin arm none eabi gcc exe dkernel dlv conf include simple dlv conf path zephyr lv conf h xxaa d fortify source d linux errno extensions d program start d zephyr id personal pinetime zephyr lib gui lvgl id personal pinetime zephyr include izephyr include generated id personal pinetime zephyr soc arm nordic nrf id personal pinetime zephyr lib libc newlib include id personal pinetime zephyr soc arm nordic nrf common id personal pinetime zephyr subsys bluetooth id personal pinetime zephyr subsys settings include id personal pinetime modules hal cmsis cmsis core include id personal pinetime modules hal nordic nrfx id personal pinetime modules hal nordic nrfx drivers include id personal pinetime modules hal nordic nrfx mdk id personal pinetime zephyr modules hal nordic nrfx id personal pinetime modules lib gui lvgl zephyr id personal pinetime modules debug segger segger id personal pinetime modules debug segger config id personal pinetime zephyr modules segger id personal pinetime modules crypto tinycrypt lib include id personal pinetime pinetime app src id personal pinetime pinetime app src bluetooth service id personal pinetime pinetime app src apps id personal pinetime pinetime app src event os imacros d personal pinetime pinetime build zephyr include generated autoconf h ffreestanding fno common g gdwarf fdiagnostics color always mcpu cortex mthumb mabi aapcs format ieee imacros d personal pinetime zephyr include toolchain zephyr stdint h wall wformat wformat security wno format zero length wno main wno pointer sign wpointer arith wexpansion to defined wno unused but set variable werror implicit int fno 
asynchronous unwind tables fno pie fno pic fno reorder functions fno defer pop fmacro prefix map d personal pinetime pinetime app cmake source dir fmacro prefix map d personal pinetime zephyr zephyr base fmacro prefix map d personal pinetime west topdir ffunction sections fdata sections specs nano specs std md mt modules lvgl cmakefiles modules lib gui lvgl zephyr dir d personal pinetime modules lib gui lvgl src extra widgets win lv win c obj mf modules lvgl cmakefiles modules lib gui lvgl zephyr dir d personal pinetime modules lib gui lvgl src extra widgets win lv win c obj d o modules lvgl cmakefiles modules lib gui lvgl zephyr dir d personal pinetime modules lib gui lvgl src extra widgets win lv win c obj c d personal pinetime modules lib gui lvgl src extra widgets win lv win c in file included from d personal pinetime modules lib gui lvgl lvgl h from d personal pinetime modules lib gui lvgl src lvgl h from d personal pinetime modules lib gui lvgl src extra widgets win lv win h from d personal pinetime modules lib gui lvgl src extra widgets win lv win c d personal pinetime modules lib gui lvgl src widgets lv slider h error error lv slider lv bar is required enable it in lv conf h lv use bar error lv slider lv bar is required enable it in lv conf h lv use bar in file included from d personal pinetime modules lib gui lvgl lvgl h from d personal pinetime modules lib gui lvgl src lvgl h from d personal pinetime modules lib gui lvgl src extra widgets win lv win h from d personal pinetime modules lib gui lvgl src extra widgets win lv win c d personal pinetime modules lib gui lvgl src widgets lv slider h error lv bar mode normal undeclared here not in a function did you mean lv blend mode normal lv slider mode normal lv bar mode normal lv blend mode normal d personal pinetime modules lib gui lvgl src widgets lv slider h error lv bar mode symmetrical undeclared here not in a function lv slider mode symmetrical lv bar mode symmetrical d personal pinetime modules lib gui 
lvgl src widgets lv slider h error lv bar mode range undeclared here not in a function lv slider mode range lv bar mode range d personal pinetime modules lib gui lvgl src widgets lv slider h error unknown type name lv bar t lv bar t bar add the ancestor s type first d personal pinetime modules lib gui lvgl src widgets lv slider h in function lv slider set value d personal pinetime modules lib gui lvgl src widgets lv slider h warning implicit declaration of function lv bar set value did you mean lv slider set value lv bar set value obj value anim lv slider set value d personal pinetime modules lib gui lvgl src widgets lv slider h in function lv slider set left value d personal pinetime modules lib gui lvgl src widgets lv slider h warning implicit declaration of function lv bar set start value lv bar set start value obj value anim d personal pinetime modules lib gui lvgl src widgets lv slider h in function lv slider set range d personal pinetime modules lib gui lvgl src widgets lv slider h warning implicit declaration of function lv bar set range did you mean lv slider set range lv bar set range obj min max lv slider set range d personal pinetime modules lib gui lvgl src widgets lv slider h in function lv slider set mode d personal pinetime modules lib gui lvgl src widgets lv slider h warning implicit declaration of function lv bar set mode did you mean lv slider set mode lv bar set mode obj lv bar mode t mode lv slider set mode d personal pinetime modules lib gui lvgl src widgets lv slider h error lv bar mode t undeclared first use in this function did you mean lv fs mode t lv bar set mode obj lv bar mode t mode lv fs mode t d personal pinetime modules lib gui lvgl src widgets lv slider h note each undeclared identifier is reported only once for each function it appears in d personal pinetime modules lib gui lvgl src widgets lv slider h error expected before mode lv bar set mode obj lv bar mode t mode d personal pinetime modules lib gui lvgl src widgets lv slider h 
in function lv slider get value d personal pinetime modules lib gui lvgl src widgets lv slider h warning implicit declaration of function lv bar get value did you mean lv slider get value wimplicit function declara please also mention any information which could help others to understand the problem you re facing what target platform are you using pinetime what have you tried to diagnose or workaround this issue find out the kconfig maybe not correct to reproduce steps to reproduce the behavior enable config lvgl y config lv conf minimal y build and see error expected behavior impact logs and console output environment please complete the following information additional context | 1 |
640,355 | 20,781,351,360 | IssuesEvent | 2022-03-16 15:01:58 | wasmerio/wasmer | https://api.github.com/repos/wasmerio/wasmer | closed | macOS function calls can be optimized | 🎉 enhancement priority-medium | Right now we are using `sigsetjmp` when calling functions from host to wasm, which adds significant overhead.
We can either use `setjmp` or Mach exception handling directly.
Performance from `sigsetjmp` to `setjmp` reported here: https://github.com/wasmerio/wasmer/pull/2102 (from 140.14ns to 29.684ns in some examples).
| 1.0 | macOS function calls can be optimized - Right now we are using `sigsetjmp` when calling functions from host to wasm, which adds significant overhead.
We can either use `setjmp` or Mach exception handling directly.
Performance from `sigsetjmp` to `setjmp` reported here: https://github.com/wasmerio/wasmer/pull/2102 (from 140.14ns to 29.684ns in some examples).
| priority | macos function calls can be optimized right now we are using sigsetjmp when calling functions from host to wasm which adds significant overhead we can either use setjmp or macho exception handling directly performance from sigsetjmp to setjmp reported here from to in some examples | 1 |
657,992 | 21,874,456,157 | IssuesEvent | 2022-05-19 08:54:00 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Frontend for the Categories | priority-medium status-new practice-app practice-app:front-end | ### Issue Description
Now that we initialized categories, we can begin implementing our pages.
This issue is for implementation of a page regarding listing all of existing categories.
### Step Details
Steps that will be performed:
- [x] Determine what will be shown in the page
- [x] Add the page to the router and the toolbar
- [x] Implement the page with mock data
- [x] Establish communication between the page and the API
### Final Actions
Upon completion, other members can use this as a template for their own pages. If we wish, we can add functionality so that clicking a category lists the lectures under that category.
### Deadline of the Issue
17.05.2022 Tuesday 24:00
### Reviewer
Altay Acar
### Deadline for the Review
18.05.2022 Wednesday 24:00 | 1.0 | Frontend for the Categories - ### Issue Description
Now that we initialized categories, we can begin implementing our pages.
This issue is for implementation of a page regarding listing all of existing categories.
### Step Details
Steps that will be performed:
- [x] Determine what will be shown in the page
- [x] Add the page to the router and the toolbar
- [x] Implement the page with mock data
- [x] Establish communication between the page and the API
### Final Actions
Upon completion, other members can use this as a template for their own pages. If we wish, we can add functionality so that clicking a category lists the lectures under that category.
### Deadline of the Issue
17.05.2022 Tuesday 24:00
### Reviewer
Altay Acar
### Deadline for the Review
18.05.2022 Wednesday 24:00 | priority | frontend for the categories issue description now that we initialized categories we can begin implementing our pages this issue is for implementation of a page regarding listing all of existing categories step details steps that will be performed determine what will be shown in the page add the page to the router and the toolbar implement the page with mock data establish communication between the page and the api final actions upon completion other members can use this as a template for their own pages if we wish we can add the functionality that once a category is clicked the lectures containing that category can be listed deadline of the issue tuesday reviewer altay acar deadline for the review wednesday | 1 |
509,459 | 14,737,037,968 | IssuesEvent | 2021-01-07 00:41:34 | codidact/qpixel | https://api.github.com/repos/codidact/qpixel | closed | Need a way to specify rep gains for new post types | area: html/css/js area: ruby complexity: unassessed priority: medium type: change request | With post unification, we now have the ability to create new post types, like wiki (already created). Site settings includes a reputation section, which lets an admin specify the rep gained for questions, answers, or articles and the rep lost for downvotes (anywhere). Wikis don't have voting and thus don't affect rep, but other new post types could have voting and therefore need to be configurable.
I suspect the current list is hard-wired, dating back to before we could create new post types. We need a way, in the site settings, to make this dynamic, so that all post types that exist on the community (and have votes) can be configured, for both upvotes and downvotes. (If there's no or low rep for an upvote on some types, it doesn't make sense to keep the same downvote penalty as for other types.)
We have also been asked (https://github.com/codidact/qpixel/issues/303) to make reputation values configurable per category so EE can have higher rep for papers, Code Golf can have no rep for the sandbox, and so on. It would be great if we could address both of these together: let an admin fill out a matrix, post types by categories, specifying both up and down values, or add setting overrides to category configuration (defaulting to the site values if not set). I suspect the latter is easier and that would be good enough.
It is possible that Music is going to want a voting-but-no-rep configuration for identification questions.
| 1.0 | Need a way to specify rep gains for new post types - With post unification, we now have the ability to create new post types, like wiki (already created). Site settings includes a reputation section, which lets an admin specify the rep gained for questions, answers, or articles and the rep lost for downvotes (anywhere). Wikis don't have voting and thus don't affect rep, but other new post types could have voting and therefore need to be configurable.
I suspect the current list is hard-wired, dating back to before we could create new post types. We need a way, in the site settings, to make this dynamic, so that all post types that exist on the community (and have votes) can be configured, for both upvotes and downvotes. (If there's no or low rep for an upvote on some types, it doesn't make sense to keep the same downvote penalty as for other types.)
We have also been asked (https://github.com/codidact/qpixel/issues/303) to make reputation values configurable per category so EE can have higher rep for papers, Code Golf can have no rep for the sandbox, and so on. It would be great if we could address both of these together: let an admin fill out a matrix, post types by categories, specifying both up and down values, or add setting overrides to category configuration (defaulting to the site values if not set). I suspect the latter is easier and that would be good enough.
It is possible that Music is going to want a voting-but-no-rep configuration for identification questions.
| priority | need a way to specify rep gains for new post types with post unification we now have the ability to create new post types like wiki already created site settings includes a reputation section which lets an admin specify the rep gained for questions answers or articles and the rep lost for downvotes anywhere wikis don t have voting and thus don t affect rep but other new post types could have voting and therefore need to be configurable i suspect the current list is hard wired dating back to before we could create new post types we need a way in the site settings to make this dynamic so that all post types that exist on the community and have votes can be configured for both upvotes and downvotes if there s no or low rep for an upvote on some types it doesn t make sense to keep the same downvote penalty as for other types we have also been asked to make reputation values configurable per category so ee can have higher rep for papers code golf can have no rep for the sandbox and so on it would be great if we could address both of these together let an admin fill out a matrix post types by categories specifying both up and down values or add setting overrides to category configuration defaulting to the site values if not set i suspect the latter is easier and that would be good enough it is possible that music is going to want a voting but no rep configuration for identification questions | 1 |
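The per-category override behavior requested in the record above (category values falling back to site-wide defaults when unset) can be sketched as follows. All names and rep values here are illustrative assumptions, not qpixel's actual models or settings:

```python
# Hypothetical sketch of per-category reputation overrides: each category may
# override the site-wide vote values per post type, and falls back to the
# site defaults when no override is set. SITE_DEFAULTS, rep_change, and the
# numbers are illustrative only.

SITE_DEFAULTS = {
    ("question", "up"): 5, ("question", "down"): -2,
    ("answer", "up"): 10, ("answer", "down"): -2,
    ("article", "up"): 10, ("article", "down"): -2,
}

def rep_change(post_type, direction, category_overrides=None):
    """Return the reputation delta for a vote, preferring the category's
    override and defaulting to the site-wide value."""
    key = (post_type, direction)
    if category_overrides and key in category_overrides:
        return category_overrides[key]
    return SITE_DEFAULTS[key]

# A "sandbox"-style category that awards no rep for question votes:
sandbox = {("question", "up"): 0, ("question", "down"): 0}
```

With this shape, an admin only fills in the cells of the matrix that differ from the site values, which matches the "setting overrides defaulting to site values" option described above.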
365,096 | 10,775,555,847 | IssuesEvent | 2019-11-03 15:06:47 | AY1920S1-CS2113T-F09-4/main | https://api.github.com/repos/AY1920S1-CS2113T-F09-4/main | closed | Exception message was shown | priority.High severity.Medium status.Ongoing | 
The IndexOutOfBounds exception message should not be shown.
<hr><sub>[original: JasonLeeWeiHern/ped#8]<br/>
</sub> | 1.0 | Exception message was shown - 
The IndexOutOfBounds exception message should not be shown.
<hr><sub>[original: JasonLeeWeiHern/ped#8]<br/>
</sub> | priority | exception message was shown the exception error indexoutofbounds exception message should not be shown | 1 |
668,129 | 22,553,720,128 | IssuesEvent | 2022-06-27 08:24:00 | medialab/portic-storymaps-2022 | https://api.github.com/repos/medialab/portic-storymaps-2022 | closed | Linechart improvements | enhancement priority:medium | - [x] handle negative values
- [x] handle vertical layout
- [ ] API : add an option to format axis ticks with pretty numbers (use `misc/formatNumber`) | 1.0 | Linechart improvements - - [x] handle negative values
- [x] handle vertical layout
- [ ] API : add an option to format axis ticks with pretty numbers (use `misc/formatNumber`) | priority | linechart improvements handle negative values handle vertical layout api add an option to format axis ticks with pretty numbers use misc formatnumber | 1 |
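A formatNumber-style tick formatter like the one requested in the checklist above might look like this sketch (Python for illustration; the project's actual `misc/formatNumber` helper is JavaScript and may format differently):

```python
def format_tick(value):
    """Compact axis-tick labels: 1200 -> '1.2k', 3500000 -> '3.5M'.
    A sketch of what a pretty-number helper might do; not the project's
    actual misc/formatNumber implementation."""
    for threshold, suffix in ((1e9, "G"), (1e6, "M"), (1e3, "k")):
        if abs(value) >= threshold:
            scaled = value / threshold
            # trim a trailing ".0" so 2000 renders as "2k", not "2.0k"
            text = f"{scaled:.1f}".rstrip("0").rstrip(".")
            return f"{text}{suffix}"
    return str(value)
```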
71,200 | 3,353,752,632 | IssuesEvent | 2015-11-18 08:30:00 | Apollo-Community/ApolloStation | https://api.github.com/repos/Apollo-Community/ApolloStation | closed | Reagents taking forever to metabolize | bug priority: medium | It seems every time we lower the tickrate, metabolism rates decrease exponentially. I saw someone who was injected with sleep toxin and 10 minutes later they had metabolized less than 0.01 units. | 1.0 | Reagents taking forever to metabolize - It seems every time we lower the tickrate, metabolism rates decrease exponentially. I saw someone who was injected with sleep toxin and 10 minutes later they had metabolized less than 0.01 units. | priority | reagents taking forever to metabolize it seems every time we lower the tickrate metabolism rates decrease exponentially i saw someone who was injected with sleep toxin and minutes later they had metabolized less than units | 1 |
720,190 | 24,782,811,891 | IssuesEvent | 2022-10-24 07:16:55 | trustwallet/wallet-core | https://api.github.com/repos/trustwallet/wallet-core | closed | [NewChain]: add Aptos support | chain-integration priority:medium size:large | **Motivation**
[Aptos](https://aptoslabs.com/) is a proposed Layer 1 blockchain that uses the [Move programming language](https://101blockchains.com/move-programming-language-tutorial/)
The goal is to support the blockchain prior to the main net launch.
**Checklist**
<!--- Group checklist per issue needed, one specific feature of your goal -->
<!--- Each big task can have subtask, doesn't hesitate to split into small pull request to simplify the review process -->
- [x] Skeleton, registry.json update - Implemented in #2595
- [x] Address support - Implemented in #2595
- [x] Transaction support
- [x] Transfer support - Implemented in #2595
- [x] Token transfer support Implemented in #2595
- [x] Script/SmartContract support Implemented in #2595
- [x] TransactionPayload support - Implemented in #2595
- [x] BCS serialization support - Implemented in #2595
- [x] NFT - Implemented in #2613
- [ ] Staking (optional)
**Resources**
<!--- Link resources this way: [My Resource Title](link) -->
[Transfer TypeScript example](https://github.com/aptos-labs/aptos-core/blob/f12c3628d4125e35597b4e6831696a488870b232/ecosystem/typescript/sdk/src/coin_client.ts#L41)
[Transaction Payload](https://github.com/aptos-labs/aptos-core/blob/f12c3628d4125e35597b4e6831696a488870b232/ecosystem/typescript/sdk/src/transaction_builder/builder.ts#L234)
[HRP/ChainID format](https://github.com/aptos-labs/aptos-core/blob/f12c3628d4125e35597b4e6831696a488870b232/ecosystem/typescript/sdk/src/utils/misc.ts#L32)
[Official developer doc](https://aptoslabs.com/developers)
[BCS Clement implementation](https://github.com/doom/wallet-core/tree/bcs)
**Additional context**
There are 3 SDKs: Python, Rust, TypeScript
| 1.0 | [NewChain]: add Aptos support - **Motivation**
[Aptos](https://aptoslabs.com/) is a proposed Layer 1 blockchain that uses the [Move programming language](https://101blockchains.com/move-programming-language-tutorial/)
The goal is to support the blockchain prior to the main net launch.
**Checklist**
<!--- Group checklist per issue needed, one specific feature of your goal -->
<!--- Each big task can have subtask, doesn't hesitate to split into small pull request to simplify the review process -->
- [x] Skeleton, registry.json update - Implemented in #2595
- [x] Address support - Implemented in #2595
- [x] Transaction support
- [x] Transfer support - Implemented in #2595
- [x] Token transfer support Implemented in #2595
- [x] Script/SmartContract support Implemented in #2595
- [x] TransactionPayload support - Implemented in #2595
- [x] BCS serialization support - Implemented in #2595
- [x] NFT - Implemented in #2613
- [ ] Staking (optional)
**Resources**
<!--- Link resources this way: [My Resource Title](link) -->
[Transfer TypeScript example](https://github.com/aptos-labs/aptos-core/blob/f12c3628d4125e35597b4e6831696a488870b232/ecosystem/typescript/sdk/src/coin_client.ts#L41)
[Transaction Payload](https://github.com/aptos-labs/aptos-core/blob/f12c3628d4125e35597b4e6831696a488870b232/ecosystem/typescript/sdk/src/transaction_builder/builder.ts#L234)
[HRP/ChainID format](https://github.com/aptos-labs/aptos-core/blob/f12c3628d4125e35597b4e6831696a488870b232/ecosystem/typescript/sdk/src/utils/misc.ts#L32)
[Official developer doc](https://aptoslabs.com/developers)
[BCS Clement implementation](https://github.com/doom/wallet-core/tree/bcs)
**Additional context**
There are 3 SDKs: Python, Rust, TypeScript
| priority | add aptos support motivation is a proposed layer blockchain that uses the the goal is to support the blockchain prior to the main net launch checklist skeleton registry json update implemented in address support implemented in transaction support transfer support implemented in token transfer support implemented in script smartcontract support implemented in transactionpayload support implemented in bcs serialization support implemented in nft implemented in staking optional resources additional context there is sdk s python rust typescript | 1 |
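The BCS serialization item in the checklist above rests on two simple primitives, sketched here: fixed-width little-endian integers and ULEB128 length prefixes. This illustrates the wire format only; it is not wallet-core's or the Aptos SDK's actual implementation:

```python
# Minimal sketch of two BCS (Binary Canonical Serialization) primitives
# used when encoding Aptos transactions.

def encode_u64(value: int) -> bytes:
    """BCS encodes a u64 as 8 little-endian bytes."""
    return value.to_bytes(8, "little")

def encode_uleb128(value: int) -> bytes:
    """Variable-length length prefix: 7 bits per byte, high bit = 'more'."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_bytes(data: bytes) -> bytes:
    """BCS byte strings are a ULEB128 length followed by the raw bytes."""
    return encode_uleb128(len(data)) + data
```

For example, the u64 amount `5` serializes to `05 00 00 00 00 00 00 00`, and a length of 300 serializes to the two bytes `AC 02`.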
391,547 | 11,575,668,886 | IssuesEvent | 2020-02-21 10:14:37 | luna/enso | https://api.github.com/repos/luna/enso | closed | File Manager Binary File Support | Category: GUI Change: Non-Breaking Difficulty: Core Contributor Priority: Medium Type: Enhancement | ### Summary
With the protocol specified in #395 we have a good idea of how we want to handle binary file transfers.
### Value
We give the IDE the tools to provide users with a way to transfer binary files to our backend through the IDE.
### Specification
- [ ] Implement the transport specified as part of #395 for transferring binary files.
- [ ] Implement the messages for transferring binary files using the above transport.
- [ ] Ensure that this mechanism works consistently and reliably with large files.
### Acceptance Criteria & Test Cases
- The above specification has been implemented.
- The functionality has been rigorously tested. | 1.0 | File Manager Binary File Support - ### Summary
With the protocol specified in #395 we have a good idea of how we want to handle binary file transfers.
### Value
We give the IDE the tools to let users transfer binary files to our backend.
### Specification
- [ ] Implement the transport specified as part of #395 for transferring binary files.
- [ ] Implement the messages for transferring binary files using the above transport.
- [ ] Ensure that this mechanism works consistently and reliably with large files.
### Acceptance Criteria & Test Cases
- The above specification has been implemented.
- The functionality has been rigorously tested. | priority | file manager binary file support summary with the protocol specified in we have a good idea of how we want to handle binary file transfers value we give the ide the tools to provide users with a way to transfer binary files to our backend through the ide specification implement the transport specified as part of for transferring binary files implement the messages for transferring binary files using the above transport ensure that this mechanism works consistently and reliably with large files acceptance criteria test cases the above specification has been implemented the functionality has been rigorously tested | 1 |
170,709 | 6,469,764,340 | IssuesEvent | 2017-08-17 07:10:07 | vmware/admiral | https://api.github.com/repos/vmware/admiral | closed | Problem accessing Admiral service on VIC OVA deployment | kind/bug priority/medium | I have deployed the VIC OVA (build 1dc0021a) using DHCP.
After I enter the vCenter credentials, I cannot access the management portal:
- Service is running on port 8282. I get certificate warning (self-signed) but then I get the error:

- Admiral container logs (captured during failed attempt):
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlRequestSender - SP alias for the login request is 192.168.100.122:8282
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Producing redirect url
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.identity.websso.client.SiteAffinity - Failed to init CdcSession. likely due to missing vmafd jar. Message: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Added Renewable condition
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Added Delegable condition
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Destination URL: https://vcsa-01a.corp.local/websso/SAML2/SSO/vsphere.local
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Relay State value is: SessionId
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoResponseListener - You have POST'ed to Websso client library!
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Validating SAMLResponse..
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.ValidationState - Validating request destination: HttpservletRequest destination=https://192.168.100.122:8282/auth/psc/callback/tokenSAML message destination=https://192.168.100.122:8282/auth/psc/callback/token
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Validating optional request ID: _759b5671ba352d374c59f0c63eebdcb8
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Validating assertion..
[376][I][2017-08-11T15:50:24.634Z][286][HttpServletRequestImpl][breakHere][HttpServletRequestResponse]
[377][I][2017-08-11T15:50:24.637Z][286][HttpServletRequestImpl][breakHere][HttpServletRequestResponse]
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Parsing assertion..
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SamlUtils - Validate assertion condition with clock tolerance = 600
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - NameID: Administrator@CORP.LOCAL
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - NameIDFormat: http://schemas.xmlsoap.org/claims/UPN
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SamlUtils - Validate sessionNotOnOrAfter with clock tolerance = 600
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Successfully validated SSO Assertion
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Successfully validated received SAMLResponse
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - Message Data.Issuer: 'https://vcsa-01a.corp.local/websso/SAML2/Metadata/vsphere.local', Subject: 'Administrator@CORP.LOCAL', Session: '_81985f6701bc5f119b908f9f41600983', SessionId: 'SessionId'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - Going to extract SAML token for 'Administrator@CORP.LOCAL'.
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=Administrator@CORP.LOCAL, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - SAML token successfully extracted.Issuer: 'https://vcsa-01a.corp.local/websso/SAML2/Metadata/vsphere.local', Subject: '{Name: Administrator, Domain: CORP.LOCAL}', Valid: 'Fri Aug 11 15:49:09 GMT 2017' - 'Fri Aug 11 15:54:09 GMT 2017', SamlSession: '_81985f6701bc5f119b908f9f41600983'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - Attempts to authenticate extracted token for '{Name: Administrator, Domain: CORP.LOCAL}'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.vim.sso.client.impl.SiteAffinityServiceDiscovery - CDC not configured java.lang.NoClassDefFoundError: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=Administrator@CORP.LOCAL, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.vim.sso.client.impl.SecurityTokenServiceImpl - Successfully acquired token for user: {Name: Administrator, Domain: CORP.LOCAL}
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.vim.sso.client.impl.SiteAffinityServiceDiscovery - CDC not configured java.lang.NoClassDefFoundError: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=Administrator@CORP.LOCAL, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.vim.sso.client.impl.SecurityTokenServiceImpl - Successfully renewed token for user: {Name: Administrator, Domain: CORP.LOCAL}
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - SAML HOK token successfully extracted.Issuer: 'https://vcsa-01a.corp.local/websso/SAML2/Metadata/vsphere.local', Subject: '{Name: Administrator, Domain: CORP.LOCAL}', Valid: 'Fri Aug 11 15:50:24 GMT 2017' - 'Sun Sep 10 15:50:24 GMT 2017', Session: '_81985f6701bc5f119b908f9f41600983'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - SAML groups: '[{Name: Domain Admins, Domain: corp.local}, {Name: Domain Users, Domain: corp.local}, {Name: Group Policy Creator Owners, Domain: corp.local}, {Name: Schema Admins, Domain: corp.local}, {Name: Enterprise Admins, Domain: corp.local}, {Name: View Agent Direct-Connection Users, Domain: corp.local}, {Name: Denied RODC Password Replication Group, Domain: corp.local}, {Name: Administrators, Domain: vsphere.local}, {Name: Everyone, Domain: vsphere.local}]'
[378][I][2017-08-11T15:50:25.318Z][286][AbstractClient][dispose][Client was disposed successfully]
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.vim.sso.client.impl.SiteAffinityServiceDiscovery - CDC not configured java.lang.NoClassDefFoundError: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=admiral-c22367d0-8a21-411f-84ae-ec1572a35999@vsphere.local, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.vim.sso.client.impl.SecurityTokenServiceImpl - Successfully acquired token for user: {Name: admiral-c22367d0-8a21-411f-84ae-ec1572a35999, Domain: vsphere.local}
[379][I][2017-08-11T15:50:25.643Z][286][AdminClientImpl][<init>][Client was created successfully]
[380][I][2017-08-11T15:50:25.740Z][286][AdminClientImpl][<init>][Client was created successfully]
[381][W][2017-08-11T15:50:25.794Z][25][8282/][processPendingServiceAvailableOperations][Service /auth/psc/sessions/e2c29d7c-d6b7-41a0-b9c4-0a8a962eb3e5-15dd1fd4ecc failed start: com.vmware.xenon.common.LocalizableValidationException: 'principalName' cannot be empty]
Issue is reproducible in my lab. | 1.0 | Problem accessing Admiral service on VIC OVA deployment - I have deployed the VIC OVA (build 1dc0021a) using DHCP.
After I enter the vCenter credentials, I cannot access the management portal:
- Service is running on port 8282. I get certificate warning (self-signed) but then I get the error:

- Admiral container logs (captured during failed attempt):
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlRequestSender - SP alias for the login request is 192.168.100.122:8282
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Producing redirect url
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.identity.websso.client.SiteAffinity - Failed to init CdcSession. likely due to missing vmafd jar. Message: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Added Renewable condition
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Added Delegable condition
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Destination URL: https://vcsa-01a.corp.local/websso/SAML2/SSO/vsphere.local
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoRequestSender - Relay State value is: SessionId
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.endpoint.SsoResponseListener - You have POST'ed to Websso client library!
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Validating SAMLResponse..
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.ValidationState - Validating request destination: HttpservletRequest destination=https://192.168.100.122:8282/auth/psc/callback/tokenSAML message destination=https://192.168.100.122:8282/auth/psc/callback/token
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Validating optional request ID: _759b5671ba352d374c59f0c63eebdcb8
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Validating assertion..
[376][I][2017-08-11T15:50:24.634Z][286][HttpServletRequestImpl][breakHere][HttpServletRequestResponse]
[377][I][2017-08-11T15:50:24.637Z][286][HttpServletRequestImpl][breakHere][HttpServletRequestResponse]
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Parsing assertion..
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SamlUtils - Validate assertion condition with clock tolerance = 600
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - NameID: Administrator@CORP.LOCAL
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - NameIDFormat: http://schemas.xmlsoap.org/claims/UPN
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SamlUtils - Validate sessionNotOnOrAfter with clock tolerance = 600
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Successfully validated SSO Assertion
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.websso.client.SsoValidationState - Successfully validated received SAMLResponse
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - Message Data.Issuer: 'https://vcsa-01a.corp.local/websso/SAML2/Metadata/vsphere.local', Subject: 'Administrator@CORP.LOCAL', Session: '_81985f6701bc5f119b908f9f41600983', SessionId: 'SessionId'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - Going to extract SAML token for 'Administrator@CORP.LOCAL'.
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=Administrator@CORP.LOCAL, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - SAML token successfully extracted.Issuer: 'https://vcsa-01a.corp.local/websso/SAML2/Metadata/vsphere.local', Subject: '{Name: Administrator, Domain: CORP.LOCAL}', Valid: 'Fri Aug 11 15:49:09 GMT 2017' - 'Fri Aug 11 15:54:09 GMT 2017', SamlSession: '_81985f6701bc5f119b908f9f41600983'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - Attempts to authenticate extracted token for '{Name: Administrator, Domain: CORP.LOCAL}'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.vim.sso.client.impl.SiteAffinityServiceDiscovery - CDC not configured java.lang.NoClassDefFoundError: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=Administrator@CORP.LOCAL, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.vim.sso.client.impl.SecurityTokenServiceImpl - Successfully acquired token for user: {Name: Administrator, Domain: CORP.LOCAL}
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.vim.sso.client.impl.SiteAffinityServiceDiscovery - CDC not configured java.lang.NoClassDefFoundError: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=Administrator@CORP.LOCAL, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.vim.sso.client.impl.SecurityTokenServiceImpl - Successfully renewed token for user: {Name: Administrator, Domain: CORP.LOCAL}
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - SAML HOK token successfully extracted.Issuer: 'https://vcsa-01a.corp.local/websso/SAML2/Metadata/vsphere.local', Subject: '{Name: Administrator, Domain: CORP.LOCAL}', Valid: 'Fri Aug 11 15:50:24 GMT 2017' - 'Sun Sep 10 15:50:24 GMT 2017', Session: '_81985f6701bc5f119b908f9f41600983'
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.admiral.auth.idm.psc.saml.sso.authentication.SamlLogonProcessor - SAML groups: '[{Name: Domain Admins, Domain: corp.local}, {Name: Domain Users, Domain: corp.local}, {Name: Group Policy Creator Owners, Domain: corp.local}, {Name: Schema Admins, Domain: corp.local}, {Name: Enterprise Admins, Domain: corp.local}, {Name: View Agent Direct-Connection Users, Domain: corp.local}, {Name: Denied RODC Password Replication Group, Domain: corp.local}, {Name: Administrators, Domain: vsphere.local}, {Name: Everyone, Domain: vsphere.local}]'
[378][I][2017-08-11T15:50:25.318Z][286][AbstractClient][dispose][Client was disposed successfully]
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] WARN com.vmware.vim.sso.client.impl.SiteAffinityServiceDiscovery - CDC not configured java.lang.NoClassDefFoundError: com/vmware/identity/cdc/CdcFactory
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.identity.token.impl.SamlTokenImpl - SAML token for SubjectNameId [value=admiral-c22367d0-8a21-411f-84ae-ec1572a35999@vsphere.local, format=http://schemas.xmlsoap.org/claims/UPN] successfully parsed from Element
[https://172.17.0.2:8282/ForkJoinPool-1-worker-0] INFO com.vmware.vim.sso.client.impl.SecurityTokenServiceImpl - Successfully acquired token for user: {Name: admiral-c22367d0-8a21-411f-84ae-ec1572a35999, Domain: vsphere.local}
[379][I][2017-08-11T15:50:25.643Z][286][AdminClientImpl][<init>][Client was created successfully]
[380][I][2017-08-11T15:50:25.740Z][286][AdminClientImpl][<init>][Client was created successfully]
[381][W][2017-08-11T15:50:25.794Z][25][8282/][processPendingServiceAvailableOperations][Service /auth/psc/sessions/e2c29d7c-d6b7-41a0-b9c4-0a8a962eb3e5-15dd1fd4ecc failed start: com.vmware.xenon.common.LocalizableValidationException: 'principalName' cannot be empty]
Issue is reproducible in my lab. | priority | problem accessing admiral service on vic ova deployment i have deployed the vic ova build using dhcp after i enter the vcenter credentials i cannot access the management portal service is running on port i get certificate warning self signed but then i get the error admiral container logs captured during failed attempt info com vmware admiral auth idm psc saml sso authentication samlrequestsender sp alias for the login request is info com vmware identity websso client endpoint ssorequestsender producing redirect url warn com vmware identity websso client siteaffinity failed to init cdcsession likely due to missing vmafd jar message com vmware identity cdc cdcfactory info com vmware identity websso client endpoint ssorequestsender added renewable condition info com vmware identity websso client endpoint ssorequestsender added delegable condition info com vmware identity websso client endpoint ssorequestsender destination url info com vmware identity websso client endpoint ssorequestsender relay state value is sessionid info com vmware identity websso client endpoint ssoresponselistener you have post ed to websso client library info com vmware identity websso client ssovalidationstate validating samlresponse info com vmware identity websso client validationstate validating request destination httpservletrequest destination message destination info com vmware identity websso client ssovalidationstate validating optional request id info com vmware identity websso client ssovalidationstate validating assertion info com vmware identity websso client ssovalidationstate parsing assertion info com vmware identity websso client samlutils validate assertion condition with clock tolerance info com vmware identity websso client ssovalidationstate nameid administrator corp local info com vmware identity websso client ssovalidationstate nameidformat info com vmware identity websso client samlutils validate sessionnotonorafter with 
clock tolerance info com vmware identity websso client ssovalidationstate successfully validated sso assertion info com vmware identity websso client ssovalidationstate successfully validated received samlresponse info com vmware admiral auth idm psc saml sso authentication samllogonprocessor message data issuer subject administrator corp local session sessionid sessionid info com vmware admiral auth idm psc saml sso authentication samllogonprocessor going to extract saml token for administrator corp local info com vmware identity token impl samltokenimpl saml token for subjectnameid successfully parsed from element info com vmware admiral auth idm psc saml sso authentication samllogonprocessor saml token successfully extracted issuer subject name administrator domain corp local valid fri aug gmt fri aug gmt samlsession info com vmware admiral auth idm psc saml sso authentication samllogonprocessor attempts to authenticate extracted token for name administrator domain corp local warn com vmware vim sso client impl siteaffinityservicediscovery cdc not configured java lang noclassdeffounderror com vmware identity cdc cdcfactory info com vmware identity token impl samltokenimpl saml token for subjectnameid successfully parsed from element info com vmware vim sso client impl securitytokenserviceimpl successfully acquired token for user name administrator domain corp local warn com vmware vim sso client impl siteaffinityservicediscovery cdc not configured java lang noclassdeffounderror com vmware identity cdc cdcfactory info com vmware identity token impl samltokenimpl saml token for subjectnameid successfully parsed from element info com vmware vim sso client impl securitytokenserviceimpl successfully renewed token for user name administrator domain corp local info com vmware admiral auth idm psc saml sso authentication samllogonprocessor saml hok token successfully extracted issuer subject name administrator domain corp local valid fri aug gmt sun sep gmt session info 
com vmware admiral auth idm psc saml sso authentication samllogonprocessor saml groups warn com vmware vim sso client impl siteaffinityservicediscovery cdc not configured java lang noclassdeffounderror com vmware identity cdc cdcfactory info com vmware identity token impl samltokenimpl saml token for subjectnameid successfully parsed from element info com vmware vim sso client impl securitytokenserviceimpl successfully acquired token for user name admiral domain vsphere local issue is reproducible in my lab | 1 |
250,779 | 7,987,579,699 | IssuesEvent | 2018-07-19 08:17:29 | fog/fog-google | https://api.github.com/repos/fog/fog-google | closed | get and other business logic should be DRY-ed up in the models | bug priority/medium | This issue is coming from the pain point that different resources behave differently when `#get('nonexistent-identity')` is called:
- for `Addresses#get`, `Servers#get`, and others, if the resource isn't found, it returns `nil` (this seems to be the preferred behavior), whereas
- for `UrlMaps#get`, `TargetHttpProxies#get`, and others, if the resource isn't found, it throws a `Fog::Errors::NotFound`.
This particular issue has been patched up in ihmccreery/fog-google@4e1d5dd36c2f8f174a126ed35a573fc971f1b954 and others, but it should be solved more permanently by DRY-ing up the duplicated business logic (as well as implementing more consistent tests).
It's worth noting that these are _breaking changes_, but they are minor enough that I'm willing to put them in v0.1, though I'm happy to hear dissent. It will require some serious workarounds to properly test (per the work I've been doing moving to Minitest) if we decide not to change the behavior until v1.
| 1.0 | get and other business logic should be DRY-ed up in the models - This issue is coming from the pain point that different resources behave differently when `#get('nonexistent-identity')` is called:
- for `Addresses#get`, `Servers#get`, and others, if the resource isn't found, it returns `nil` (this seems to be the preferred behavior), whereas
- for `UrlMaps#get`, `TargetHttpProxies#get`, and others, if the resource isn't found, it throws a `Fog::Errors::NotFound`.
This particular issue has been patched up in ihmccreery/fog-google@4e1d5dd36c2f8f174a126ed35a573fc971f1b954 and others, but it should be solved more permanently by DRY-ing up the duplicated business logic (as well as implementing more consistent tests).
It's worth noting that these are _breaking changes_, but they are minor enough that I'm willing to put them in v0.1, though I'm happy to hear dissent. It will require some serious workarounds to properly test (per the work I've been doing moving to Minitest) if we decide not to change the behavior until v1.
| priority | get and other business logic should be dry ed up in the models this issue is coming from the pain point that different resources behave differently when get nonexistent identity is called for addresses get servers get and others if the resource isn t found it returns nil this seems to be the preferred behavior whereas for urlmaps get targethttpproxies get and others if the resource isn t found it throws a fog errors notfound this particular issue has been patched up in ihmccreery fog google and others but it should be solved more permanently by dry ing up the duplicated business logic as well as implementing more consistent tests it s worth noting that these are breaking changes but they are minor enough that i m willing to put them in though i m happy to hear dissent it will require some serious workarounds to properly test per the work i ve been doing moving to minitest if we decide not to change the behavior until | 1 |
619,071 | 19,515,244,631 | IssuesEvent | 2021-12-29 09:06:39 | edwisely-ai/Marketing | https://api.github.com/repos/edwisely-ai/Marketing | closed | Blog - Create Help Documents for Product | Criticality Low Priority Medium | 1. Also make blogs like Why we released a Web App, or something like that. | 1.0 | Blog - Create Help Documents for Product - 1. Also make blogs like Why we released a Web App, or something like that. | priority | blog create help documents for product also make blogs like why we released a web app or something like that | 1 |
687,645 | 23,533,933,088 | IssuesEvent | 2022-08-19 18:18:26 | cthit/Gamma | https://api.github.com/repos/cthit/Gamma | closed | Add monitoring for backend | Type: Enhancement good first issue Where: Backend Priority: Medium | Should be pretty easy. You can add a dependency to Spring, with actuators. Then you only need to protect the endpoint | 1.0 | Add monitoring for backend - Should be pretty easy. You can add a dependency to Spring, with actuators. Then you only need to protect the endpoint | priority | add monitoring for backend should be pretty easy you can add a dependency to spring with actuators then you only need to protect the endpoint | 1 |
756,353 | 26,467,865,370 | IssuesEvent | 2023-01-17 02:52:27 | OffprintStudios/Sailfish | https://api.github.com/repos/OffprintStudios/Sailfish | closed | Fix scroll on mobile | bug medium priority | currently, the mobile view auto-scrolls to the top of the content on navigation and hides the nav bar. this needs to be fixed | 1.0 | Fix scroll on mobile - currently, the mobile view auto-scrolls to the top of the content on navigation and hides the nav bar. this needs to be fixed | priority | fix scroll on mobile currently the mobile view auto scrolls to the top of the content on navigation and hides the nav bar this needs to be fixed | 1 |
78,148 | 3,509,486,432 | IssuesEvent | 2016-01-08 23:00:56 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Windfury Totem (BB #917) | Category: Spells migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** cent0s
**Original Date:** 26.05.2015 23:20:09 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/917
<hr>
[http://wowwiki.wikia.com/Windfury_Totem](http://wowwiki.wikia.com/Windfury_Totem)
No chance of proc, proceed almost constantly as the machine | 1.0 | Windfury Totem (BB #917) - This issue was migrated from bitbucket.
**Original Reporter:** cent0s
**Original Date:** 26.05.2015 23:20:09 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/917
<hr>
[http://wowwiki.wikia.com/Windfury_Totem](http://wowwiki.wikia.com/Windfury_Totem)
No chance of proc, proceed almost constantly as the machine | priority | windfury totem bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link no chance of proc proceed almost constantly as the machine | 1 |
578,966 | 17,169,343,453 | IssuesEvent | 2021-07-15 00:22:09 | AtlasOfLivingAustralia/biocache-service | https://api.github.com/repos/AtlasOfLivingAustralia/biocache-service | closed | Improve downloads multimedia implementation | Downloads enhancement images priority-medium | Download API provides an option to specify `extra=multimedia,image_url` to be able to get image info in download file. There are two problems with this:
- the `image_url` field is just an `image_uuid` string and thus relies on the client to encode a URL themselves, which is not documented anywhere that I could find. Ben Raymond has brought this to our attention.
- it doesn't work for non-image multimedia types, such as sounds or video files - fields are empty in the download CSV. Ben also highlighted this issue.
Suggestion is to change the field from `image_url` to `multimedia_url` (and the associated `image_url_all`). To also provide the full URL not just the image UUID. This is important for external users of the ALA API and means the service would be a proper REST service (must have URIs).
| 1.0 | Improve downloads multimedia implementation - Download API provides an option to specify `extra=multimedia,image_url` to be able to get image info in download file. There are two problems with this:
- the `image_url` field is just an `image_uuid` string and thus relies on the client to encode a URL themselves, which is not documented anywhere that I could find. Ben Raymond has brought this to our attention.
- it doesn't work for non-image multimedia types, such as sounds or video files - fields are empty in the download CSV. Ben also highlighted this issue.
Suggestion is to change the field from `image_url` to `multimedia_url` (and the associated `image_url_all`). To also provide the full URL not just the image UUID. This is important for external users of the ALA API and means the service would be a proper REST service (must have URIs).
| priority | improve downloads multimedia implementation download api provides an option to specify extra multimedia image url to be able to get image info in download file there are two problems with this the image url field is just an image uuid string and thus relies on the client to encode a url themselves which is not documented anywhere that i could find ben raymond has brought this to our attention it doesn t work for non image multimedia types such as sounds or video files fields are empty in the download csv ben also highlighted this issue suggestion is to change the field from image url to multimedia url and the associated image url all to also provide the full url not just the image uuid this is important for external users of the ala api and means the service would be a proper rest service must have uris | 1 |
203,042 | 7,057,410,193 | IssuesEvent | 2018-01-04 16:21:32 | ImageEngine/cortex | https://api.github.com/repos/ImageEngine/cortex | closed | Support GeometricData::Interpretation in IECoreRI | priority-medium renderer-RenderMan type-enhancement | This needs to be used in ParameterList and PrimitiveVariableList to decide the renderman type of the data. Once this is done we should entirely abandon the typeHints system, which should simplify both those classes and also the Renderer.
| 1.0 | Support GeometricData::Interpretation in IECoreRI - This needs to be used in ParameterList and PrimitiveVariableList to decide the renderman type of the data. Once this is done we should entirely abandon the typeHints system, which should simplify both those classes and also the Renderer.
| priority | support geometricdata interpretation in iecoreri this needs to be used in parameterlist and primitivevariablelist to decide the renderman type of the data once this is done we should entirely abandon the typehints system which should simplify both those classes and also the renderer | 1 |
280,104 | 8,678,061,135 | IssuesEvent | 2018-11-30 18:42:41 | linterhub/usage-parser | https://api.github.com/repos/linterhub/usage-parser | closed | Refactoring of junk code | Priority: Medium Status: In Progress Type: Maintenance | The next solutions aren't good:
* function handle of handle.js file
```
const context = require('./template/context.js');
context.options = [];
```
* function templatizer of templatizer.js file
```
const argumentsTemplate = JSON.parse(fs.readFileSync('./src/template/args.json'));
argumentsTemplate.definitions.arguments.properties = {};
```
It is necessary because JavaScript fills the empty templates, and on each next iteration of the parser, JS uses these filled templates. Please find the best solution for how to fix it | 1.0 | Refactoring of junk code - The next solutions aren't good:
* function handle of handle.js file
```
const context = require('./template/context.js');
context.options = [];
```
* function templatizer of templatizer.js file
```
const argumentsTemplate = JSON.parse(fs.readFileSync('./src/template/args.json'));
argumentsTemplate.definitions.arguments.properties = {};
```
It is necessary because JavaScript fills the empty templates, and on each next iteration of the parser, JS uses these filled templates. Please find the best solution for how to fix it | priority | refactoring of junk code the next solutions aren t good function handle of handle js file const context require template context js context options function templatizer of templatizer js file const argumentstemplate json parse fs readfilesync src template args json argumentstemplate definitions arguments properties it is necessary because javascript fills the empty templates and on each next iteration of the parser js uses these filled templates please find the best solution for how to fix it | 1 |
361,386 | 10,708,002,726 | IssuesEvent | 2019-10-24 18:42:34 | Gelbpunkt/IdleRPG | https://api.github.com/repos/Gelbpunkt/IdleRPG | closed | Profile idea | Priority: Medium enhancement | **Is your feature request related to a problem? Please describe.**
This isn't a problem but a suggestion. Showing race and god on the profile.
**Describe the solution you'd like**
Not sure where it should go but it would be nice to see my race and god on my profile. Maybe next to where class is on the profile.
| 1.0 | Profile idea - **Is your feature request related to a problem? Please describe.**
This isn't a problem but a suggestion. Showing race and god on the profile.
**Describe the solution you'd like**
Not sure where it should go but it would be nice to see my race and god on my profile. Maybe next to where class is on the profile.
| priority | profile idea is your feature request related to a problem please describe this isn t a problem but a suggestion showing race and god on the profile describe the solution you d like not sure where it should go but it would be nice to see my race and god on my profile maybe next to where class is on the profile | 1 |
57,545 | 3,082,706,470 | IssuesEvent | 2015-08-24 00:22:26 | magro/memcached-session-manager | https://api.github.com/repos/magro/memcached-session-manager | closed | what kind of service should I use for memcached-session manager | bug imported invalid Priority-Medium | _From [xiaolian...@gmail.com](https://code.google.com/u/111950605265423305308/) on June 27, 2012 05:00:57_
membase or couchbase?
and another question:
when I use 2 or more services to support sticky sessions, should the services be a cluster, or just individual services?
The same question applies to the non-sticky session configuration.
thanks
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=146_ | 1.0 | what kind of service should I use for memcached-session manager - _From [xiaolian...@gmail.com](https://code.google.com/u/111950605265423305308/) on June 27, 2012 05:00:57_
membase or couchbase?
and another question:
when I use 2 or more services to support sticky sessions, should the services be a cluster, or just individual services?
The same question applies to the non-sticky session configuration.
thanks
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=146_ | priority | what kind of service should i use for memcached session manager from on june membase or couchbase and another question when i use or more services to support sticky sessions should the services be a cluster or just individual services the same question applies to the non sticky session configuration thanks original issue | 1 |
177,976 | 6,589,171,841 | IssuesEvent | 2017-09-14 07:51:50 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | Migration for reimbursement | kind/task priority/medium project/reimbursement |
I want to have full DBs in all environments (dev, demo, etc.)
So it is necessary to prepare the following migration:
- add **PHARMACY** to il.dictionaries “Legal_entity_type”
```
update dictionaries set values=('{"MIS": "Medical Information system", "MSP": "заклад з надання медичних послуг", "PHARMACY": "Аптека"}') where name='LEGAL_ENTITY_TYPE';
```
- add roles - **PHARMACY_OWNER**, **PHARMACIST** to mithril.roles
```
insert into roles VALUES
('2beccbb1-eeec-4eae-834b-ea7c12388f1b'
,'PHARMACY_OWNER'
,'employee:read employee:write employee:details employee:deactivate employee_request:approve employee_request:read employee_request:write employee_request:reject legal_entity:read otp:write otp:read declaration:read division:read division:write division:details division:activate division:deactivate'
,now()
,now());
insert into roles VALUES
('f189aec7-1939-4f21-9ba7-7b7b66b233e3'
,'PHARMACIST'
,'employee:read employee_request:approve employee_request:read employee_request:reject legal_entity:read declaration:read division:read'
,now()
,now());
```
- add HR scopes to mithril.roles **emloyee_request:write**
```
update roles
set scope = 'legal_entity:read employee_request:read employee_request:write employee:read employee:write employee:details employee:deactivate division:read division:details'
where name ='HR';
```
- add kveds allowed for pharmacy **KVEDS_ALLOWED_PHARMACY** to il.dictionaries
```
insert into dictionaries VALUES
('KVEDS_ALLOWED_PHARMACY'
,'{"47.73": "Роздрібна торгівля фармацевтичними товарами в спеціалізованих магазинах"}'
,'["SYSTEM", "EXTERNAL"]'
,true);
```
- add new values **DRUGSTORE**, **DRUGSTORE2** il.dictionaries
```
update dictionaries
set values=('{"FAP": "ФАП", "CLINIC": "Філія (інший відокремлений підрозділ)", "AMBULANT_CLINIC": "Амбулаторія","DRUGSTORE":"Аптека", "DRUGSTORE2":"Аптечний пункт"}')
where name='DIVISION_TYPE';
```
| 1.0 | Migration for reimbursement -
I want to have full DBs in all environments (dev, demo, etc.)
So it is necessary to prepare the following migration:
- add **PHARMACY** to il.dictionaries “Legal_entity_type”
```
update dictionaries set values=('{"MIS": "Medical Information system", "MSP": "заклад з надання медичних послуг", "PHARMACY": "Аптека"}') where name='LEGAL_ENTITY_TYPE';
```
- add roles - **PHARMACY_OWNER**, **PHARMACIST** to mithril.roles
```
insert into roles VALUES
('2beccbb1-eeec-4eae-834b-ea7c12388f1b'
,'PHARMACY_OWNER'
,'employee:read employee:write employee:details employee:deactivate employee_request:approve employee_request:read employee_request:write employee_request:reject legal_entity:read otp:write otp:read declaration:read division:read division:write division:details division:activate division:deactivate'
,now()
,now());
insert into roles VALUES
('f189aec7-1939-4f21-9ba7-7b7b66b233e3'
,'PHARMACIST'
,'employee:read employee_request:approve employee_request:read employee_request:reject legal_entity:read declaration:read division:read'
,now()
,now());
```
- add HR scopes to mithril.roles **emloyee_request:write**
```
update roles
set scope = 'legal_entity:read employee_request:read employee_request:write employee:read employee:write employee:details employee:deactivate division:read division:details'
where name ='HR';
```
- add kveds allowed for pharmacy **KVEDS_ALLOWED_PHARMACY** to il.dictionaries
```
insert into dictionaries VALUES
('KVEDS_ALLOWED_PHARMACY'
,'{"47.73": "Роздрібна торгівля фармацевтичними товарами в спеціалізованих магазинах"}'
,'["SYSTEM", "EXTERNAL"]'
,true);
```
- add new values **DRUGSTORE**, **DRUGSTORE2** il.dictionaries
```
update dictionaries
set values=('{"FAP": "ФАП", "CLINIC": "Філія (інший відокремлений підрозділ)", "AMBULANT_CLINIC": "Амбулаторія","DRUGSTORE":"Аптека", "DRUGSTORE2":"Аптечний пункт"}')
where name='DIVISION_TYPE';
```
| priority | migration for reimbursement i want to have full db s in all environments dev demo etc so that it is necessary to prepare migration add pharmacy to il dictionaries “legal entity type” update dictionaries set values mis medical information system msp заклад з надання медичних послуг pharmacy аптека where name legal entity type add roles pharmacy owner pharmacist to mithril roles insert into roles values eeec pharmacy owner employee read employee write employee details employee deactivate employee request approve employee request read employee request write employee request reject legal entity read otp write otp read declaration read division read division write division details division activate division deactivate now now insert into roles values pharmacist employee read employee request approve employee request read employee request reject legal entity read declaration read division read now now add hr scopes to mithril roles emloyee request write update roles set scope legal entity read employee request read employee request write employee read employee write employee details employee deactivate division read division details where name hr add kveds allowed for pharmacy kveds allowed pharmacy to il dictionaries insert into dictionaries values kveds allowed pharmacy роздрібна торгівля фармацевтичними товарами в спеціалізованих магазинах true add new values drugstore il dictionaries update dictionaries set values fap фап clinic філія інший відокремлений підрозділ ambulant clinic амбулаторія drugstore аптека аптечний пункт where name division type | 1 |
261,880 | 8,247,256,960 | IssuesEvent | 2018-09-11 15:04:46 | trimstray/htrace.sh | https://api.github.com/repos/trimstray/htrace.sh | closed | Added ssl owner 'Organization' and 'OrganizationalUnit'. | Priority: Medium Status: Completed Type: Feature | - `_ssl_domain_subject_o`
- `_ssl_domain_subject_ou` | 1.0 | Added ssl owner 'Organization' and 'OrganizationalUnit'. - - `_ssl_domain_subject_o`
- `_ssl_domain_subject_ou` | priority | added ssl owner organization and organizationalunit ssl domain subject o ssl domain subject ou | 1 |
35,832 | 2,793,218,975 | IssuesEvent | 2015-05-11 09:28:08 | umutafacan/bounswe2015group3 | https://api.github.com/repos/umutafacan/bounswe2015group3 | closed | Crowdsourcing software development and creative work | auto-migrated duplicate Priority-Medium Type-Task | ```
I am gonna do my research on Crowdsourcing, especially on the topics of
"Crowdsourcing creative work" and "Crowdsourcing software development"
Estimated time: 3 hours of research and 1.5 hours of documenting. 4.5 hours in
total
```
Original issue reported on code.google.com by `ozn....@gmail.com` on 22 Feb 2015 at 2:10 | 1.0 | Crowdsourcing software development and creative work - ```
I am gonna do my research on Crowdsourcing, especially on the topics of
"Crowdsourcing creative work" and "Crowdsourcing software development"
Estimated time: 3 hours of research and 1.5 hours of documenting. 4.5 hours in
total
```
Original issue reported on code.google.com by `ozn....@gmail.com` on 22 Feb 2015 at 2:10 | priority | crowdsourcing software development and creative work i am gonna do my research on crowdsourcing especially on the topics of crowdsourcing creative work and crowdsourcing software development estimated time hours of research and hours of documenting hours in total original issue reported on code google com by ozn gmail com on feb at | 1 |
32,590 | 2,756,221,566 | IssuesEvent | 2015-04-27 06:19:15 | bwapi/bwapi | https://api.github.com/repos/bwapi/bwapi | closed | Unloading a unit causes them to be in an attack frame | bug Priority-Medium | Unloading an SCV from a Dropship causes `isAttackFrame()` to return true.
| 1.0 | Unloading a unit causes them to be in an attack frame - Unloading an SCV from a Dropship causes `isAttackFrame()` to return true.
| priority | unloading a unit causes them to be in an attack frame unloading an scv from a dropship causes isattackframe to return true | 1 |
169,319 | 6,399,430,330 | IssuesEvent | 2017-08-05 00:18:05 | phetsims/kite | https://api.github.com/repos/phetsims/kite | closed | SVG-style ellipticalArcTo not implemented | dev:enhancement priority:3-medium | This is needed for full SVG path handling generated by our parser.
| 1.0 | SVG-style ellipticalArcTo not implemented - This is needed for full SVG path handling generated by our parser.
| priority | svg style ellipticalarcto not implemented this is needed for full svg path handling generated by our parser | 1 |
525,154 | 15,239,121,981 | IssuesEvent | 2021-02-19 03:38:31 | actually-colab/editor | https://api.github.com/repos/actually-colab/editor | opened | Endpoint Input Schema Validation | REST difficulty: medium priority: low server | - [ ] Create a validation middleware
- [ ] Implement on POST /notebook
- [ ] Implement on POST /notebook/:id/share
- [ ] Implement on GET /notebooks | 1.0 | Endpoint Input Schema Validation - - [ ] Create a validation middleware
- [ ] Implement on POST /notebook
- [ ] Implement on POST /notebook/:id/share
- [ ] Implement on GET /notebooks | priority | endpoint input schema validation create a validation middleware implement on post notebook implement on post notebook id share implement on get notebooks | 1 |
192,312 | 6,848,513,448 | IssuesEvent | 2017-11-13 18:47:07 | osuosl/streamwebs | https://api.github.com/repos/osuosl/streamwebs | opened | Unit Typo: In view soil survey distance is incorrectly labeled | bug medium priority | In the View Soil Survey view the "Distance from stream" is labeled as having units of feet when the code has converted feet to meters. The unit label just needs to be changed from _ft_ to _m_. | 1.0 | Unit Typo: In view soil survey distance is incorrectly labeled - In the View Soil Survey view the "Distance from stream" is labeled as having units of feet when the code has converted feet to meters. The unit label just needs to be changed from _ft_ to _m_. | priority | unit typo in view soil survey distance is incorrectly labeled in the view soil survey view the distance from stream is labeled as having units of feet when the code has converted feet to meters the unit label just needs to be changed from ft to m | 1 |
666,669 | 22,363,065,484 | IssuesEvent | 2022-06-15 23:05:08 | diffgram/diffgram | https://api.github.com/repos/diffgram/diffgram | reopened | Fix hard crash if blob can't load | medium-priority | In some cases lifecycle rules can break a task from rendering properly:
https://diffgram.com/task/99297
Double check if all urls are regenerating correctly and that the error handling does not block the task from being rendered. | 1.0 | Fix hard crash if blob can't load - In some cases lifecycle rules can break a task from rendering properly:
https://diffgram.com/task/99297
Double check if all urls are regenerating correctly and that the error handling does not block the task from being rendered. | priority | fix hard crash if blob can t load in some cases lifecycle rules can break a task from rendering properly double check if all urls are regenerating correctly and that the error handling does not block the task from being rendered | 1 |
56,561 | 3,080,249,941 | IssuesEvent | 2015-08-21 20:57:43 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | The update progress is displayed incorrectly in the Win7 status bar. | bug Component-UI imported Priority-Medium | _From [Tirael...@gmail.com](https://code.google.com/u/108935377450235604965/) on May 23, 2012 19:30:06_
During auto-update, FlylinkDC constantly shows a completely filled progress bar. Win7 x64, flylink x64, the very latest build from the Night Orion server. This appeared quite a while ago; there was just no time to report it.
**Attachment:** [Снимок.png](http://code.google.com/p/flylinkdc/issues/detail?id=758)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=758_ | 1.0 | The update progress is displayed incorrectly in the Win7 status bar. - _From [Tirael...@gmail.com](https://code.google.com/u/108935377450235604965/) on May 23, 2012 19:30:06_
During auto-update, FlylinkDC constantly shows a completely filled progress bar. Win7 x64, flylink x64, the very latest build from the Night Orion server. This appeared quite a while ago; there was just no time to report it.
**Attachment:** [Снимок.png](http://code.google.com/p/flylinkdc/issues/detail?id=758)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=758_ | priority | the update progress is displayed incorrectly in the status bar from on may during auto update flylinkdc constantly shows a completely filled progress bar flylink the very latest build from the night orion server appeared quite a while ago there was just no time to report it attachment original issue | 1 |
610,706 | 18,922,072,950 | IssuesEvent | 2021-11-17 03:44:58 | Sage-Bionetworks/rocc-app | https://api.github.com/repos/Sage-Bionetworks/rocc-app | closed | Add metrics to the home page | Priority: Medium | Per user feedback (see below), we will investigate adding metrics to the Home Page.
We can start with our current set of past challenges.
Organizer Feedback
"So the first thing that I would, I would want to see as I'm seeing that page right now is somewhere indicating the community engagement of this challenge. How many challenges are currently listed in our inventory? How many users are there? How many participants do they follow that? It could be something as simple as underneath the register button with white font on top of the green color mentioning, you know, we're currently have exactly over there. We currently have a list of X challenges, you know, a thousand or 120 challenges we have in our community, 10,000 participants. We have 10 organizers. So what is, so that will show me immediately the level of engagement from the community, with this, with this platform, with this inventory."
Tasks:
- [ ] Backend - @tschaffter
- [ ] Frontend - @vpchung | 1.0 | Add metrics to the home page - Per user feedback (see below), we will investigate adding metrics to the Home Page.
We can start with our current set of past challenges.
Organizer Feedback
"So the first thing that I would, I would want to see as I'm seeing that page right now is somewhere indicating the community engagement of this challenge. How many challenges are currently listed in our inventory? How many users are there? How many participants do they follow that? It could be something as simple as underneath the register button with white font on top of the green color mentioning, you know, we're currently have exactly over there. We currently have a list of X challenges, you know, a thousand or 120 challenges we have in our community, 10,000 participants. We have 10 organizers. So what is, so that will show me immediately the level of engagement from the community, with this, with this platform, with this inventory."
Tasks:
- [ ] Backend - @tschaffter
- [ ] Frontend - @vpchung | priority | add metrics to the home page per user feedback see below we will investigate adding metrics to the home page we can start with our current set of past challenges organizer feedback so the first thing that i would i would want to see as i m seeing that page right now is somewhere indicating the community engagement of this challenge how many challenges are currently listed in our inventory how many users are there how many participants do they follow that it could be something as simple as underneath the register button with white font on top of the green color mentioning you know we re currently have exactly over there we currently have a list of x challenges you know a thousand or challenges we have in our community participants we have organizers so what is so that will show me immediately the level of engagement from the community with this with this platform with this inventory tasks backend tschaffter frontend vpchung | 1 |
68,040 | 3,283,957,033 | IssuesEvent | 2015-10-28 14:56:09 | marvinlabs/customer-area | https://api.github.com/repos/marvinlabs/customer-area | closed | "spectator" role for projects | enhancement Premium add-ons Priority - medium | From: http://wp-customerarea.com/support/topic/third-role-for-projects/
Would be nice to have a 3rd category of users on a project (managers, contributors, spectators/guests) | 1.0 | "spectator" role for projects - From: http://wp-customerarea.com/support/topic/third-role-for-projects/
Would be nice to have a 3rd category of users on a project (managers, contributors, spectators/guests) | priority | spectator role for projects from would be nice to have a category of users on a project managers contributors spectators guests | 1 |
107,904 | 4,321,744,452 | IssuesEvent | 2016-07-25 11:30:48 | richelbilderbeek/Cer2016 | https://api.github.com/repos/richelbilderbeek/Cer2016 | closed | Use nLTT::nltt_stat instead of approximation | medium priority | Sure, it was useful to add `get_nltt_values` to the `nLTT` package. But now, I could just calculate the exact nLTT statistic instead. | 1.0 | Use nLTT::nltt_stat instead of approximation - Sure, it was useful to add `get_nltt_values` to the `nLTT` package. But now, I could just calculate the exact nLTT statistic instead. | priority | use nltt nltt stat instead of approximation sure it was useful to add get nltt values to the nltt package but now i could just calculate the exact nltt statistic instead | 1 |
72,606 | 3,388,399,029 | IssuesEvent | 2015-11-29 08:19:44 | crutchcorn/stagger | https://api.github.com/repos/crutchcorn/stagger | closed | stagger --print fails with AttributeError | bug Priority Medium | ```
What steps will reproduce the problem?
1. download attached id3 file
2. run stagger --print dump2.id3
3. File "/home/michael/omg/trunk/stagger/tags.py", line 324, in getter
(track, sep, total) = frame.text[0].partition("/")
AttributeError: 'list' object has no attribute 'text'
For some reason, the 'TRCK' attribute of the stagger object is a list of
two equal frames. This might be due to a broken/nonstandard tag.
Interestingly, mutagen shows only one TRCK tag, while exfalso (which uses
mutagen) shows twice the same TRCK, as stagger does. So, this tag is
certainly broken, however the exception that occurs is not quite the right
thing to happen here.
```
Original issue reported on code.google.com by `superm...@googlemail.com` on 30 Oct 2009 at 11:36
Attachments:
* [dump2.id3](https://storage.googleapis.com/google-code-attachments/stagger/issue-37/comment-0/dump2.id3)
| 1.0 | stagger --print fails with AttributeError - ```
What steps will reproduce the problem?
1. download attached id3 file
2. run stagger --print dump2.id3
3. File "/home/michael/omg/trunk/stagger/tags.py", line 324, in getter
(track, sep, total) = frame.text[0].partition("/")
AttributeError: 'list' object has no attribute 'text'
For some reason, the 'TRCK' attribute of the stagger object is a list of
two equal frames. This might be due to a broken/nonstandard tag.
Interestingly, mutagen shows only one TRCK tag, while exfalso (which uses
mutagen) shows twice the same TRCK, as stagger does. So, this tag is
certainly broken, however the exception that occurs is not quite the right
thing to happen here.
```
Original issue reported on code.google.com by `superm...@googlemail.com` on 30 Oct 2009 at 11:36
Attachments:
* [dump2.id3](https://storage.googleapis.com/google-code-attachments/stagger/issue-37/comment-0/dump2.id3)
| priority | stagger print fails with attributeerror what steps will reproduce the problem download attached file run stagger print file home michael omg trunk stagger tags py line in getter track sep total frame text partition attributeerror list object has no attribute text for some reason the trck attribute of the stagger object is a list of two equal frames this might be due to a broken nonstandard tag interestingly mutagen shows only one trck tag while exfalso which uses mutagen shows twice the same trck as stagger does so this tag is certainly broken however the exception that occurs is not quite the right thing to happen here original issue reported on code google com by superm googlemail com on oct at attachments | 1 |
735,589 | 25,405,067,921 | IssuesEvent | 2022-11-22 14:46:10 | PHI-base/PHI5_web_display | https://api.github.com/repos/PHI-base/PHI5_web_display | closed | Display wild type host genotypes as 'wild type' in metagenotypes | medium priority | (Extracted from issue https://github.com/PHI-base/PHI5_web_display/issues/52)
In the display names for metagenotypes, hosts with no genes are displayed with the string "(wild_type)", which is an allele type instead of a genotype name. For example:
TRI5+ (wild_type) [Wild type product level] F. graminearum (PH-1) | **(wild_type)** T. aestivum (cv. Bobwhite)
The correct display is simply "wild type", followed by the scientific name.
TRI5+ (wild_type) [Wild type product level] F. graminearum (PH-1) | **wild type** T. aestivum (cv. Bobwhite)
Also note also that there should only be **one** space between the string "wild type" and the scientific name: the current display has two spaces. | 1.0 | Display wild type host genotypes as 'wild type' in metagenotypes - (Extracted from issue https://github.com/PHI-base/PHI5_web_display/issues/52)
In the display names for metagenotypes, hosts with no genes are displayed with the string "(wild_type)", which is an allele type instead of a genotype name. For example:
TRI5+ (wild_type) [Wild type product level] F. graminearum (PH-1) | **(wild_type)** T. aestivum (cv. Bobwhite)
The correct display is simply "wild type", followed by the scientific name.
TRI5+ (wild_type) [Wild type product level] F. graminearum (PH-1) | **wild type** T. aestivum (cv. Bobwhite)
Also note also that there should only be **one** space between the string "wild type" and the scientific name: the current display has two spaces. | priority | display wild type host genotypes as wild type in metagenotypes extracted from issue in the display names for metagenotypes hosts with no genes are displayed with the string wild type which is an allele type instead of a genotype name for example wild type f graminearum ph wild type t aestivum cv bobwhite the correct display is simply wild type followed by the scientific name wild type f graminearum ph wild type t aestivum cv bobwhite also note also that there should only be one space between the string wild type and the scientific name the current display has two spaces | 1 |
698,793 | 23,991,791,930 | IssuesEvent | 2022-09-14 02:19:27 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Unique index involving custom FUNCTION results in ERRORDATA_STACK_SIZE | kind/bug area/ysql priority/medium | Jira Link: [[DB-296]](https://yugabyte.atlassian.net/browse/DB-296)
### Description
When porting commit 34ff15660b4f752e3941d661c3896fd96b1571f9 from upstream, we would see:
```
+CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;
+CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;
+WARNING: AbortSubTransaction while in DEFAULT state
+WARNING: AbortSubTransaction while in ABORT state
+WARNING: AbortSubTransaction while in ABORT state
+WARNING: AbortSubTransaction while in ABORT state
+ERROR: Illegal state: Set active sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+PANIC: ERRORDATA_STACK_SIZE exceeded
+server closed the connection unexpectedly
+ This probably means the server terminated abnormally
+ before or while processing the request.
+connection to server was lost
```
[DB-296]: https://yugabyte.atlassian.net/browse/DB-296?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] Unique index involving custom FUNCTION results in ERRORDATA_STACK_SIZE - Jira Link: [[DB-296]](https://yugabyte.atlassian.net/browse/DB-296)
### Description
When porting commit 34ff15660b4f752e3941d661c3896fd96b1571f9 from upstream, we would see:
```
+CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;
+CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;
+WARNING: AbortSubTransaction while in DEFAULT state
+WARNING: AbortSubTransaction while in ABORT state
+WARNING: AbortSubTransaction while in ABORT state
+WARNING: AbortSubTransaction while in ABORT state
+ERROR: Illegal state: Set active sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
+PANIC: ERRORDATA_STACK_SIZE exceeded
+server closed the connection unexpectedly
+ This probably means the server terminated abnormally
+ before or while processing the request.
+connection to server was lost
```
[DB-296]: https://yugabyte.atlassian.net/browse/DB-296?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | unique index involving custom function results in errordata stack size jira link description when porting commit from upstream we would see create materialized view sro index mv as select as c create unique index on sro index mv c where unwanted grant nofail warning abortsubtransaction while in default state warning abortsubtransaction while in abort state warning abortsubtransaction while in abort state warning abortsubtransaction while in abort state error illegal state set active sub transaction when not transaction is running error illegal state rollback sub transaction when not transaction is running error illegal state rollback sub transaction when not transaction is running error illegal state rollback sub transaction when not transaction is running error illegal state rollback sub transaction when not transaction is running panic errordata stack size exceeded server closed the connection unexpectedly this probably means the server terminated abnormally before or while processing the request connection to server was lost | 1 |
246,535 | 7,895,389,951 | IssuesEvent | 2018-06-29 02:58:04 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Add the ability to control all the viewing settings in the x ray image query. | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any | John Fields, who is working with Steve Langer, would like more control over the image view. In particular he is interested in setting a perspective view. We should just make all the controls available.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 11/19/2014 03:51 pm
Original update: 11/21/2014 03:53 pm
Ticket number: 2069 | 1.0 | Add the ability to control all the viewing settings in the x ray image query. - John Fields, who is working with Steve Langer, would like more control over the image view. In particular he is interested in setting a perspective view. We should just make all the controls available.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 11/19/2014 03:51 pm
Original update: 11/21/2014 03:53 pm
Ticket number: 2069 | priority | add the ability to control all the viewing settings in the x ray image query john fields who is working with steve langer would like more control over the image view in particular he is interested in setting a perspective view we should just make all the controls available redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author eric brugger original creation pm original update pm ticket number | 1 |
100,243 | 4,081,742,863 | IssuesEvent | 2016-05-31 10:03:52 | nim-lang/Nim | https://api.github.com/repos/nim-lang/Nim | closed | Missing docs/*.txt causes doc2 to fail | Medium Priority | While playing with gradha's github pages documentation generator thingie, doc2 failed, with macros.nim complaining that it couldn't find ../doc/astspec.txt
````
$ ~/.babel/bin/gh_nimrod_doc_pages -c .
Generating docs for target 'master'
lib/core/macros.nim(15, 33) Error: cannot open '../doc/astspec.txt'
lib/pure/future.nim(18, 11) Error: undeclared identifier: 'newNimNode'
lib/pure/future.nim(18, 22) Error: undeclared identifier: 'nnkProcTy'
lib/pure/future.nim(18, 21) Error: expression 'newNimNode(nnkProcTy)' cannot be called
lib/pure/future.nim(19, 32) Error: undeclared identifier: 'nnkFormalParams'
lib/pure/future.nim(19, 31) Error: expression 'newNimNode(nnkFormalParams)' cannot be called
lib/pure/future.nim(21, 2) Error: undeclared identifier: 'expectKind'
lib/pure/future.nim(21, 16) Error: undeclared identifier: 'nnkIdent'
lib/pure/future.nim(21, 12) Error: expression 'expectKind(b, nnkIdent)' cannot be called
lib/pure/future.nim(24, 8) Error: undeclared identifier: 'kind'
lib/pure/future.nim(25, 5) Error: undeclared identifier: 'nnkPar'
lib/pure/future.nim(25, 5) Error: internal error: cannot generate code for: nnkPar
No stack traceback available
Error running /usr/local/bin/nimrod doc2 --verbosity:0 --index:on --out:src/iterutils.html src/iterutils.nim on 'src/iterutils.nim', compiler aborted.
All done.
$
````
- The Mac binary tar.gz includes doc/*.html files, but not the text files (and not even astspec.html); but building releases is a separate issue. Adding the .txt from github to my doc folder fixed the problem.
Macros.nim has this line:
````
## .. include:: ../doc/astspec.txt
````
dom96 asked whether, when you run nimrod doc2 macros.nim, you get a stack trace or just an error:
````bash
nimrod doc2 macros.nim
...
Hint: used config file '/Users/jason/Dotfiles/bin/nimrod-0.9.4/config/nimrod.cfg' [Conf]
Hint: used config file '/Users/jason/Dotfiles/bin/nimrod-0.9.4/config/nimdoc.cfg' [Conf]
Hint: system [Processing]
Hint: macros [Processing]
lib/core/macros.nim(15, 33) Error: cannot open '../doc/astspec.txt'
Error: unhandled exception: lib/core/macros.nim(15, 33) Error: cannot open '../doc/astspec.txt' [ERecoverableError]
$
```` | 1.0 | Missing docs/*.txt causes doc2 to fail - While playing with gradha's github pages documentation generator thingie, doc2 failed, with macros.nim complaining that it couldn't find ../doc/astspec.txt
````
$ ~/.babel/bin/gh_nimrod_doc_pages -c .
Generating docs for target 'master'
lib/core/macros.nim(15, 33) Error: cannot open '../doc/astspec.txt'
lib/pure/future.nim(18, 11) Error: undeclared identifier: 'newNimNode'
lib/pure/future.nim(18, 22) Error: undeclared identifier: 'nnkProcTy'
lib/pure/future.nim(18, 21) Error: expression 'newNimNode(nnkProcTy)' cannot be called
lib/pure/future.nim(19, 32) Error: undeclared identifier: 'nnkFormalParams'
lib/pure/future.nim(19, 31) Error: expression 'newNimNode(nnkFormalParams)' cannot be called
lib/pure/future.nim(21, 2) Error: undeclared identifier: 'expectKind'
lib/pure/future.nim(21, 16) Error: undeclared identifier: 'nnkIdent'
lib/pure/future.nim(21, 12) Error: expression 'expectKind(b, nnkIdent)' cannot be called
lib/pure/future.nim(24, 8) Error: undeclared identifier: 'kind'
lib/pure/future.nim(25, 5) Error: undeclared identifier: 'nnkPar'
lib/pure/future.nim(25, 5) Error: internal error: cannot generate code for: nnkPar
No stack traceback available
Error running /usr/local/bin/nimrod doc2 --verbosity:0 --index:on --out:src/iterutils.html src/iterutils.nim on 'src/iterutils.nim', compiler aborted.
All done.
$
````
- The Mac binary tar.gz includes doc/*.html files, but not the text files (and not even astspec.html); but building releases is a separate issue. Adding the .txt from github to my doc folder fixed the problem.
Macros.nim has this line:
````
## .. include:: ../doc/astspec.txt
````
dom96 asked whether, when you run nimrod doc2 macros.nim, you get a stack trace or just an error:
````bash
nimrod doc2 macros.nim
...
Hint: used config file '/Users/jason/Dotfiles/bin/nimrod-0.9.4/config/nimrod.cfg' [Conf]
Hint: used config file '/Users/jason/Dotfiles/bin/nimrod-0.9.4/config/nimdoc.cfg' [Conf]
Hint: system [Processing]
Hint: macros [Processing]
lib/core/macros.nim(15, 33) Error: cannot open '../doc/astspec.txt'
Error: unhandled exception: lib/core/macros.nim(15, 33) Error: cannot open '../doc/astspec.txt' [ERecoverableError]
$
```` | priority | missing docs txt causes to fail while playing with gradha s github pages documentation generator thingie failed with macros nim complaining that it couldn t find doc astspec txt babel bin gh nimrod doc pages c generating docs for target master lib core macros nim error cannot open doc astspec txt lib pure future nim error undeclared identifier newnimnode lib pure future nim error undeclared identifier nnkprocty lib pure future nim error expression newnimnode nnkprocty cannot be called lib pure future nim error undeclared identifier nnkformalparams lib pure future nim error expression newnimnode nnkformalparams cannot be called lib pure future nim error undeclared identifier expectkind lib pure future nim error undeclared identifier nnkident lib pure future nim error expression expectkind b nnkident cannot be called lib pure future nim error undeclared identifier kind lib pure future nim error undeclared identifier nnkpar lib pure future nim error internal error cannot generate code for nnkpar no stack traceback available error running usr local bin nimrod verbosity index on out src iterutils html src iterutils nim on src iterutils nim compiler aborted all done the mac binary tar gz includes doc html files but not the text files and not even astspec html but building releases is a separate issue adding the txt from github to my doc folder fixed the problem macros nim has this line include doc astspec txt asked if when you run nimrod macros nim do you get a stack trace or just an error bash nimrod macros nim hint used config file users jason dotfiles bin nimrod config nimrod cfg hint used config file users jason dotfiles bin nimrod config nimdoc cfg hint system hint macros lib core macros nim error cannot open doc astspec txt error unhandled exception lib core macros nim error cannot open doc astspec txt | 1 |
335,102 | 10,148,943,215 | IssuesEvent | 2019-08-05 14:14:09 | rstudio/websocket | https://api.github.com/repos/rstudio/websocket | closed | Allow changing max_message_size (frame size) | Difficulty: Intermediate Effort: Low Help Wanted Need Info Priority: Medium Type: Enhancement | websocket++ has a limit of 32MB for frame-sizes. This is far too low for data analytic applications.
Could you please add an option to change that from R level? Thanks. | 1.0 | Allow changing max_message_size (frame size) - websocket++ has a limit of 32MB for frame-sizes. This is far too low for data analytic applications.
Could you please add an option to change that from R level? Thanks. | priority | allow changing max message size frame size websocket has a limit of for frame sizes this is far too low for data analytic applications could you please add an option to change that from r level thanks | 1 |
173,330 | 6,523,600,110 | IssuesEvent | 2017-08-29 09:18:35 | Gapminder/ddf-validation | https://api.github.com/repos/Gapminder/ddf-validation | closed | upgrade UNEXPECTED_DATA rule | effort1: medium (half-day) priority1: urgent status: in progress type: enhancement | This rule should be sensitive to a case if csv file contains inconsistent columns with empty cells. | 1.0 | upgrade UNEXPECTED_DATA rule - This rule should be sensitive to a case if csv file contains inconsistent columns with empty cells. | priority | upgrade unexpected data rule this rule should be sensitive to a case if csv file contains inconsistent columns with empty cells | 1 |
230,028 | 7,603,447,030 | IssuesEvent | 2018-04-29 14:39:45 | AnSyn/ansyn | https://api.github.com/repos/AnSyn/ansyn | opened | Favorites functionality is affecting the next/previous overlay functionality | Bug Priority: Medium Severity: Medium | for example:
1. open https://goo.gl/F83SFa
2. navigate to next/previous overlay using the status bar buttons - all is OK
3. mark one of the overlays as favorite
4. turn on the "show only favorites" filter
5. turn off the "show only favorites" filter
6. navigate to next/previous overlay using the status bar buttons - **directions are the opposite of those expected**
| 1.0 | Favorites functionality is affecting the next/previous overlay functionality - for example:
1. open https://goo.gl/F83SFa
2. navigate to next/previous overlay using the status bar buttons - all is OK
3. mark one of the overlays as favorite
4. turn on the "show only favorites" filter
5. turn off the "show only favorites" filter
6. navigate to next/previous overlay using the status bar buttons - **directions are the opposite of those expected**
 | priority | favorites functionality is affecting the next previous overlay functionality for example open navigate to next previous overlay using the status bar buttons all is ok mark one of the overlays as favorite turn on the show only favorites filter turn off the show only favorites filter navigate to next previous overlay using the status bar buttons directions are the opposite of those expected | 1 |
345,023 | 10,351,614,102 | IssuesEvent | 2019-09-05 07:21:31 | zdnscloud/singlecloud | https://api.github.com/repos/zdnscloud/singlecloud | closed | Getting or deleting a udp ingress returns 404 | bug priority: Medium | 1. Create a udp ingress,
2. Get this udp ingress,
3. Delete this udp ingress,
Result:
Getting or deleting it returns 404.


 | 1.0 | Getting or deleting a udp ingress returns 404 - 1. Create a udp ingress,
2. Get this udp ingress,
3. Delete this udp ingress,
Result:
Getting or deleting it returns 404.


 | priority | getting or deleting a udp create a udp ingress get this udp ingress delete this udp ingress result | 1 |
371,102 | 10,961,639,916 | IssuesEvent | 2019-11-27 15:45:01 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | ATT/L2CAP "deadlock" | area: Bluetooth bug has-pr priority: medium | To reproduce:
Pull this branch: https://github.com/carlescufi/zephyr/tree/gh-20938
EDIT: https://github.com/carlescufi/zephyr/tree/gh-20938-rev2 contains the patches from #20951
Build `samples/bluetooth/peripheral` and flash to an nRF52840 DK
Build `samples/bluetooth/central_hr` and flash to an nRF52840 DK
Examine the console output in the peripheral
Context: Modified sample app that is sending Write Cmd non-stop at high throughput while doing discovery and multiple control procedures over HCI (PHY, DLE, etc).
We just hit the following "deadlock" where we are waiting 30 seconds to allocate an ATT PDU (ATT MTU response). Depending on the timing one of the following symptoms happens:
1. HCI command send timeout and assert
The `k_sem_take(&sync_sem, HCI_CMD_TIMEOUT)` in `bt_hci_cmd_send_sync()` times out since it has a 10 second timeout.
This is because the TX thread is blocked (see below) and hence the HCI command is not sent at all.
```
ASSERTION FAIL [err == 0] @ ZEPHYR_BASE/subsys/bluetooth/host/hci_core.c:340
k_sem_take failed with err -11
```
2. ATT packet allocation timeout
Same problem, but in this case there is no HCI command pending to be transmitted so instead of hitting the hci_core assert we get this:
```
[00:00:30.511,016] <wrn> bt_conn: Unable to allocate buffer: timeout 30000
[00:00:30.511,047] <err> bt_att: Unable to allocate buffer for op 0x03
```
---------
The TX thread follows this sequence:
```
hci_core: process_events()
conn: bt_conn_process_tx()
conn: send_buf()
conn: send_frag()
conn: add_pending_tx()
stuck at add_pending_tx::k_fifo_get(&free_tx, K_FOREVER);
```
The system workqueue in parallel follows this sequence:
```
l2cap: l2cap_rx_process()
l2cap: l2cap_chan_recv()
att: bt_att_recv()
att: att_mtu_req()
att: bt_att_create_pdu()
l2cap: bt_l2cap_create_pdu_timeout()
conn: bt_conn_create_pdu_timeout()
stuck at bt_conn_create_pdu_timeout::net_buf_alloc(pool, timeout) // timeout = 30 seconds
```
Perhaps we should reconsider the conditional statement [here](https://github.com/zephyrproject-rtos/zephyr/blob/03ae58490c13c72fe8401b2bd70f888c784a804a/subsys/bluetooth/host/conn.c#L2301), since this "deadlock" completely breaks the application.
| 1.0 | ATT/L2CAP "deadlock" - To reproduce:
Pull this branch: https://github.com/carlescufi/zephyr/tree/gh-20938
EDIT: https://github.com/carlescufi/zephyr/tree/gh-20938-rev2 contains the patches from #20951
Build `samples/bluetooth/peripheral` and flash to an nRF52840 DK
Build `samples/bluetooth/central_hr` and flash to an nRF52840 DK
Examine the console output in the peripheral
Context: Modified sample app that is sending Write Cmd non-stop at high throughput while doing discovery and multiple control procedures over HCI (PHY, DLE, etc).
We just hit the following "deadlock" where we are waiting 30 seconds to allocate an ATT PDU (ATT MTU response). Depending on the timing one of the following symptoms happens:
1. HCI command send timeout and assert
The `k_sem_take(&sync_sem, HCI_CMD_TIMEOUT)` in `bt_hci_cmd_send_sync()` times out since it has a 10 second timeout.
This is because the TX thread is blocked (see below) and hence the HCI command is not sent at all.
```
ASSERTION FAIL [err == 0] @ ZEPHYR_BASE/subsys/bluetooth/host/hci_core.c:340
k_sem_take failed with err -11
```
2. ATT packet allocation timeout
Same problem, but in this case there is no HCI command pending to be transmitted so instead of hitting the hci_core assert we get this:
```
[00:00:30.511,016] <wrn> bt_conn: Unable to allocate buffer: timeout 30000
[00:00:30.511,047] <err> bt_att: Unable to allocate buffer for op 0x03
```
---------
The TX thread follows this sequence:
```
hci_core: process_events()
conn: bt_conn_process_tx()
conn: send_buf()
conn: send_frag()
conn: add_pending_txt()
stuck at add_pending_tx::k_fifo_get(&free_tx, K_FOREVER);
```
The system workqueue in parallel follows this sequence:
```
l2cap: l2cap_rx_process()
l2cap: l2cap_chan_recv()
att: bt_att_recv()
att: att_mtu_req()
att: bt_att_create_pdu()
l2cap: bt_l2cap_create_pdu_timeout()
conn: bt_conn_create_pdu_timeout()
stuck at bt_conn_create_pdu_timeout::net_buf_alloc(pool, timeout) // timeout = 30 seconds
```
Perhaps we should reconsider the conditional statement [here](https://github.com/zephyrproject-rtos/zephyr/blob/03ae58490c13c72fe8401b2bd70f888c784a804a/subsys/bluetooth/host/conn.c#L2301), since this "deadlock" completely breaks the application.
| priority | att deadlock to reproduce pull this branch edit contains the patches from build samples bluetooth peripheral and flash to an dk build samples bluetooth central hr and flash to an dk examine the console output in the peripheral context modified sample app that is sending write cmd non stop at high throughput while doing discovery and multiple control procedures over hci phy dle etc we just hit the following deadlock where we are waiting seconds to allocate an att pdu att mtu response depending on the timing one of the following symptoms happens hci command send timeout and assert the k sem take sync sem hci cmd timeout in bt hci cmd send sync times out since it has a second timeout this is because the tx thread is blocked see below and hence the hci command is not sent at all assertion fail zephyr base subsys bluetooth host hci core c k sem take failed with err att packet allocation timeout same problem but in this case there is no hci command pending to be transmitted so instead of hitting the hci core assert we get this bt conn unable to allocate buffer timeout bt att unable to allocate buffer for op the tx thread follows this sequence hci core process events conn bt conn process tx conn send buf conn send frag conn add pending txt stuck at add pending tx k fifo get free tx k forever the system workqueue in parallel follows this sequence rx process chan recv att bt att recv att att mtu req att bt att create pdu bt create pdu timeout conn bt conn create pdu timeout stuck at bt conn create pdu timeout net buf alloc pool timeout timeout seconds perhaps we should reconsider the conditional statement since this deadlock completely breaks the application | 1 |
395,987 | 11,699,669,408 | IssuesEvent | 2020-03-06 16:02:31 | AbdulSir/InstaPIc | https://api.github.com/repos/AbdulSir/InstaPIc | closed | Task 1.2.1: Posting pictures and viewing them on the feed page (2 story points) | Priority: Medium Task bug | -The upload option should upload photos one beneath the other
-Should solve the following bug: when 2 same photos are pasted one after another the default alt tag appears
-Should clean up the CSS so that the page looks like the others HTML pages | 1.0 | Task 1.2.1: Posting pictures and viewing them on the feed page (2 story points) - -The upload option should upload photos one beneath the other
-Should solve the following bug: when 2 same photos are pasted one after another the default alt tag appears
-Should clean up the CSS so that the page looks like the others HTML pages | priority | task posting pictures and viewing them on the feed page story points the upload option should upload photos one beneath the other should solve the following bug when same photos are pasted one after another the default alt tag appears should clean up the css so that the page looks like the others html pages | 1 |
128,050 | 5,047,809,311 | IssuesEvent | 2016-12-20 10:38:35 | Victoire/victoire | https://api.github.com/repos/Victoire/victoire | opened | When adding a criteria, better use a select menu instead of typing the choice | Priority : Medium ~EasyPick ~Enhancement | it is fastidious to fill in the form, it would be better to choose from a select menu - as there is not a lot of choices) | 1.0 | When adding a criteria, better use a select menu instead of typing the choice - it is fastidious to fill in the form, it would be better to choose from a select menu - as there is not a lot of choices) | priority | when adding a criteria better use a select menu instead of typing the choice it is fastidious to fill in the form it would be better to choose from a select menu as there is not a lot of choices | 1 |
766,686 | 26,894,095,256 | IssuesEvent | 2023-02-06 11:04:38 | netlify/next-runtime | https://api.github.com/repos/netlify/next-runtime | closed | [Bug]: middleware-manifest.json not found | type: bug priority: medium support_escalation | ### Summary
It appears that for a site that's not using Middlewares at all, we're still looking for a `middleware-manifest.json` file and this is causing errors to load the site altogether.
A user reported this on the forums: https://answers.netlify.com/t/53484
As per our sync yesterday, we're still not sure about the exact cause of this issue, so this might need some further investigation.
### Steps to reproduce
Try visiting their website here: https://naughty-varahamihira-fcffe0.netlify.app/
### A link to a reproduction repository
https://github.com/GabZanMacaw/napoleontest
### Plugin version
4.2.7
### More information about your build
- [ ] I am building using the CLI
- [ ] I am building using file-based configuration (`netlify.toml`)
### What OS are you using?
_No response_
### Your netlify.toml file
None in repo
### Your public/_redirects file
None in repo
### Your `next.config.js` file
<details>
<summary>`next.config.js`</summary>
```js
const withPWA = require("next-pwa");
module.exports = withPWA({
pwa: {
dest: "public",
},
images: {
//domains: ["localhost"],
domains: ["res.cloudinary.com"],
},
});
// module.exports = {
// images: {
// domains: ['res.cloudinary.com'],
// },
// }
```
</details>
### Builds logs (or link to your logs)
https://app.netlify.com/sites/naughty-varahamihira-fcffe0/deploys/6230a45e1f405520c81e1d71
### Function logs
<details>
<summary>Function logs</summary>
```
{
"errorType": "Error",
"errorMessage": "Cannot find module '/var/task/.next/server/middleware-manifest.json'\nRequire stack:\n- /var/task/node_modules/next/dist/server/next-server.js\n- /var/task/.netlify/functions-internal/___netlify-odb-handler/handlerUtils.js\n- /var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js\n- /var/task/___netlify-odb-handler.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"trace": [
"Error: Cannot find module '/var/task/.next/server/middleware-manifest.json'",
"Require stack:",
"- /var/task/node_modules/next/dist/server/next-server.js",
"- /var/task/.netlify/functions-internal/___netlify-odb-handler/handlerUtils.js",
"- /var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js",
"- /var/task/___netlify-odb-handler.js",
"- /var/runtime/UserFunction.js",
"- /var/runtime/index.js",
" at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)",
" at Function.Module._load (internal/modules/cjs/loader.js:746:27)",
" at Module.require (internal/modules/cjs/loader.js:974:19)",
" at require (internal/modules/cjs/helpers.js:93:18)",
" at NextNodeServer.getMiddlewareManifest (/var/task/node_modules/next/dist/server/next-server.js:600:20)",
" at new Server (/var/task/node_modules/next/dist/server/base-server.js:128:40)",
" at new NextNodeServer (/var/task/node_modules/next/dist/server/next-server.js:74:9)",
" at getBridge (/var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js:46:28)",
" at handler (/var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js:75:46)",
" at Runtime.handler (/var/task/node_modules/@netlify/functions/dist/lib/builder.js:41:25)"
]
}
```
</details>
### .next JSON files
_No response_ | 1.0 | [Bug]: middleware-manifest.json not found - ### Summary
It appears that for a site that's not using Middlewares at all, we're still looking for a `middleware-manifest.json` file and this is causing errors to load the site altogether.
A user reported this on the forums: https://answers.netlify.com/t/53484
As per our sync yesterday, we're still not sure about the exact cause of this issue, so this might need some further investigation.
### Steps to reproduce
Try visiting their website here: https://naughty-varahamihira-fcffe0.netlify.app/
### A link to a reproduction repository
https://github.com/GabZanMacaw/napoleontest
### Plugin version
4.2.7
### More information about your build
- [ ] I am building using the CLI
- [ ] I am building using file-based configuration (`netlify.toml`)
### What OS are you using?
_No response_
### Your netlify.toml file
None in repo
### Your public/_redirects file
None in repo
### Your `next.config.js` file
<details>
<summary>`next.config.js`</summary>
```js
const withPWA = require("next-pwa");
module.exports = withPWA({
pwa: {
dest: "public",
},
images: {
//domains: ["localhost"],
domains: ["res.cloudinary.com"],
},
});
// module.exports = {
// images: {
// domains: ['res.cloudinary.com'],
// },
// }
```
</details>
### Builds logs (or link to your logs)
https://app.netlify.com/sites/naughty-varahamihira-fcffe0/deploys/6230a45e1f405520c81e1d71
### Function logs
<details>
<summary>Function logs</summary>
```
{
"errorType": "Error",
"errorMessage": "Cannot find module '/var/task/.next/server/middleware-manifest.json'\nRequire stack:\n- /var/task/node_modules/next/dist/server/next-server.js\n- /var/task/.netlify/functions-internal/___netlify-odb-handler/handlerUtils.js\n- /var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js\n- /var/task/___netlify-odb-handler.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"trace": [
"Error: Cannot find module '/var/task/.next/server/middleware-manifest.json'",
"Require stack:",
"- /var/task/node_modules/next/dist/server/next-server.js",
"- /var/task/.netlify/functions-internal/___netlify-odb-handler/handlerUtils.js",
"- /var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js",
"- /var/task/___netlify-odb-handler.js",
"- /var/runtime/UserFunction.js",
"- /var/runtime/index.js",
" at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)",
" at Function.Module._load (internal/modules/cjs/loader.js:746:27)",
" at Module.require (internal/modules/cjs/loader.js:974:19)",
" at require (internal/modules/cjs/helpers.js:93:18)",
" at NextNodeServer.getMiddlewareManifest (/var/task/node_modules/next/dist/server/next-server.js:600:20)",
" at new Server (/var/task/node_modules/next/dist/server/base-server.js:128:40)",
" at new NextNodeServer (/var/task/node_modules/next/dist/server/next-server.js:74:9)",
" at getBridge (/var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js:46:28)",
" at handler (/var/task/.netlify/functions-internal/___netlify-odb-handler/___netlify-odb-handler.js:75:46)",
" at Runtime.handler (/var/task/node_modules/@netlify/functions/dist/lib/builder.js:41:25)"
]
}
```
</details>
### .next JSON files
_No response_ | priority | middleware manifest json not found summary it appears that for a site that s not using middlewares at all we re still looking for a middleware manifest json file and this is causing errors to load the site altogether a user reported this on the forums as per our sync yesterday we re still not sure about the exact cause of this issue so this might need some further investigation steps to reproduce try visiting their website here a link to a reproduction repository plugin version more information about your build i am building using the cli i am building using file based configuration netlify toml what os are you using no response your netlify toml file none in repo your public redirects file none in repo your next config js file next config js js const withpwa require next pwa module exports withpwa pwa dest public images domains domains module exports images domains builds logs or link to your logs function logs function logs errortype error errormessage cannot find module var task next server middleware manifest json nrequire stack n var task node modules next dist server next server js n var task netlify functions internal netlify odb handler handlerutils js n var task netlify functions internal netlify odb handler netlify odb handler js n var task netlify odb handler js n var runtime userfunction js n var runtime index js trace error cannot find module var task next server middleware manifest json require stack var task node modules next dist server next server js var task netlify functions internal netlify odb handler handlerutils js var task netlify functions internal netlify odb handler netlify odb handler js var task netlify odb handler js var runtime userfunction js var runtime index js at function module resolvefilename internal modules cjs loader js at function module load internal modules cjs loader js at module require internal modules cjs loader js at require internal modules cjs helpers js at nextnodeserver 
getmiddlewaremanifest var task node modules next dist server next server js at new server var task node modules next dist server base server js at new nextnodeserver var task node modules next dist server next server js at getbridge var task netlify functions internal netlify odb handler netlify odb handler js at handler var task netlify functions internal netlify odb handler netlify odb handler js at runtime handler var task node modules netlify functions dist lib builder js next json files no response | 1 |
118,911 | 4,757,562,905 | IssuesEvent | 2016-10-24 16:56:36 | geosolutions-it/geotools | https://api.github.com/repos/geosolutions-it/geotools | opened | Read collections data as complex features | C009-2016-MONGODB enhancement Priority: Medium | A source store capable of reading MongoDB collections and encode them as complex features, i.e gt-complex ComplexFeature. | 1.0 | Read collections data as complex features - A source store capable of reading MongoDB collections and encode them as complex features, i.e gt-complex ComplexFeature. | priority | read collections data as complex features a source store capable of reading mongodb collections and encode them as complex features i e gt complex complexfeature | 1 |
304,446 | 9,332,297,070 | IssuesEvent | 2019-03-28 11:51:17 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | Regression if keyword uses `BuiltIn.run_keyword` internally to execute user keyword with timeouts and TRACE log level | bug priority: medium | Hello!
After updating the verision of Robot Framework, some tests started failing with this error message:
```
Variable '${x}' not found.
```
This happens, when I try to run keyword with arguments (which is in a resource file) using `BuiltIn().run_keyword` in Python code.
Here is an example code that reproduced this situation:
1. Resource file `my_resource.robot`
```robot
*** Settings ***
Documentation My resource file
*** Keywords ***
Keyword With Arguments
[Arguments] ${x} ${y} ${z}=${None}
Log many ${x} ${y} ${z}
```
2. Library `mylib.py`
```python
from robot.libraries.BuiltIn import BuiltIn
def run_keyword_from_resource_file():
BuiltIn().run_keyword("my_resource.Keyword With Arguments", 'This is x', 911)
```
3. Suite file `example.robot`
```robot
*** Settings ***
Library mylib.py
Resource my_resource.robot
Test Timeout 10 seconds
*** Test Cases ***
Variable Not Found Bug
Run Keyword From Resource File
```
**My environment:** Robot Framework 3.1.1 (Python 3.6.2 on win32), but this error is also reprodused with Robot Framework 3.1
It might be similar with issue #3025, because if I remove `Test Timeout`, the test will pass successfully. | 1.0 | Regression if keyword uses `BuiltIn.run_keyword` internally to execute user keyword with timeouts and TRACE log level - Hello!
After updating the verision of Robot Framework, some tests started failing with this error message:
```
Variable '${x}' not found.
```
This happens, when I try to run keyword with arguments (which is in a resource file) using `BuiltIn().run_keyword` in Python code.
Here is an example code that reproduced this situation:
1. Resource file `my_resource.robot`
```robot
*** Settings ***
Documentation My resource file
*** Keywords ***
Keyword With Arguments
[Arguments] ${x} ${y} ${z}=${None}
Log many ${x} ${y} ${z}
```
2. Library `mylib.py`
```python
from robot.libraries.BuiltIn import BuiltIn
def run_keyword_from_resource_file():
BuiltIn().run_keyword("my_resource.Keyword With Arguments", 'This is x', 911)
```
3. Suite file `example.robot`
```robot
*** Settings ***
Library mylib.py
Resource my_resource.robot
Test Timeout 10 seconds
*** Test Cases ***
Variable Not Found Bug
Run Keyword From Resource File
```
**My environment:** Robot Framework 3.1.1 (Python 3.6.2 on win32), but this error is also reprodused with Robot Framework 3.1
It might be similar with issue #3025, because if I remove `Test Timeout`, the test will pass successfully. | priority | regression if keyword uses builtin run keyword internally to execute user keyword with timeouts and trace log level hello after updating the verision of robot framework some tests started failing with this error message variable x not found this happens when i try to run keyword with arguments which is in a resource file using builtin run keyword in python code here is an example code that reproduced this situation resource file my resource robot robot settings documentation my resource file keywords keyword with arguments x y z none log many x y z library mylib py python from robot libraries builtin import builtin def run keyword from resource file builtin run keyword my resource keyword with arguments this is x suite file example robot robot settings library mylib py resource my resource robot test timeout seconds test cases variable not found bug run keyword from resource file my environment robot framework python on but this error is also reprodused with robot framework it might be similar with issue because if i remove test timeout the test will pass successfully | 1 |
87,111 | 3,736,997,077 | IssuesEvent | 2016-03-08 17:43:35 | nfprojects/nfengine | https://api.github.com/repos/nfprojects/nfengine | closed | nfRendererOGL4: Multiple OpenGL contexts | enhancement high priority medium | Supporting multiple OS windows requires nfRendererOGL4 to perform switching between contexts. Of course, the situation varies between platforms - resource sharing between contexts is differently implemented, plus OpenGL Extensions must be handled in a different way.
Linux seems to have a slightly easier to handle ABI, so it would be a good starting point.
**Task list:**
- [x] Add a Context singleton, which will perform following actions:
- [x] Allocate a main OpenGL Context (aka. Master Context).
- [x] Expose data needed to create per-Backbuffer contexts (aka. Slave Contexts) in a multi-platform way.
- [x] Implement the solution on Linux only, as Extensions there do not depend on Contexts.
- [x] Port the solution to Windows:
- [x] Master Context and Slave Context implementation on Windows.
- [x] OpenGL Extensions on Windows are context-dependent, so each created context should have its own OGL Extension pointer set. | 1.0 | nfRendererOGL4: Multiple OpenGL contexts - Supporting multiple OS windows requires nfRendererOGL4 to perform switching between contexts. Of course, the situation varies between platforms - resource sharing between contexts is differently implemented, plus OpenGL Extensions must be handled in a different way.
Linux seems to have a slightly easier to handle ABI, so it would be a good starting point.
**Task list:**
- [x] Add a Context singleton, which will perform following actions:
- [x] Allocate a main OpenGL Context (aka. Master Context).
- [x] Expose data needed to create per-Backbuffer contexts (aka. Slave Contexts) in a multi-platform way.
- [x] Implement the solution on Linux only, as Extensions there do not depend on Contexts.
- [x] Port the solution to Windows:
- [x] Master Context and Slave Context implementation on Windows.
- [x] OpenGL Extensions on Windows are context-dependent, so each created context should have its own OGL Extension pointer set. | priority | multiple opengl contexts supporting multiple os windows requires to perform switching between contexts of course the situation varies between platforms resource sharing between contexts is differently implemented plus opengl extensions must be handled in a different way linux seems to have a slightly easier to handle abi so it would be a good starting point task list add a context singleton which will perform following actions allocate a main opengl context aka master context expose data needed to create per backbuffer contexts aka slave contexts in a multi platform way implement the solution on linux only as extensions there do not depend on contexts port the solution to windows master context and slave context implementation on windows opengl extensions on windows are context dependent so each created context should have its own ogl extension pointer set | 1 |
53,488 | 3,040,692,489 | IssuesEvent | 2015-08-07 16:47:25 | patrickomni/omnimobileserver | https://api.github.com/repos/patrickomni/omnimobileserver | opened | Listener - accept and store location information as reported by devices | DCR Priority MEDIUM | Omni takes device that can self report location to prospective customer and installs in their asset (fridge, etc). Omni brings up web page, shows device reporting, and then clicks the carousel control to bring up the map which shows the device where it has been placed.
Req't - accept and store location information as reported by devices | 1.0 | Listener - accept and store location information as reported by devices - Omni takes device that can self report location to prospective customer and installs in their asset (fridge, etc). Omni brings up web page, shows device reporting, and then clicks the carousel control to bring up the map which shows the device where it has been placed.
Req't - accept and store location information as reported by devices | priority | listener accept and store location information as reported by devices omni takes device that can self report location to prospective customer and installs in their asset fridge etc omni brings up web page shows device reporting and then clicks the carousel control to bring up the map which shows the device where it has been placed req t accept and store location information as reported by devices | 1 |
141,044 | 5,428,401,311 | IssuesEvent | 2017-03-03 15:49:56 | Angblah/The-Comparator | https://api.github.com/repos/Angblah/The-Comparator | closed | Product Organizing in Comparison | Priority: Medium Type: Enhancement | As a user, I want to order product features that are relevant to me so that I may easily compare those features | 1.0 | Product Organizing in Comparison - As a user, I want to order product features that are relevant to me so that I may easily compare those features | priority | product organizing in comparison as a user i want to order product features that are relevant to me so that i may easily compare those features | 1 |
549,222 | 16,088,118,329 | IssuesEvent | 2021-04-26 13:44:15 | Znypr/my-thai-star | https://api.github.com/repos/Znypr/my-thai-star | opened | Order status: implement internal status | Sprint#2 high priority medium | - Orders need an internal status field that stores the current status | 1.0 | Order status: implement internal status - - Orders need an internal status field that stores the current status | priority | order status implement internal status orders need an internal status field that stores the current status | 1 |
28,196 | 2,700,403,855 | IssuesEvent | 2015-04-04 03:55:28 | NodineLegal/OpenLawOffice | https://api.github.com/repos/NodineLegal/OpenLawOffice | closed | Assigning responsible user should automatically assign contact | Priority : Medium Status : Confirmed Type : Enhancement | The user's contact should automatically be assigned. This information is stored in the user's profile.
In a matter, the role should match the responsibility,
In a task, the contact should be direct assigned. | 1.0 | Assigning responsible user should automatically assign contact - The user's contact should automatically be assigned. This information is stored in the user's profile.
In a matter, the role should match the responsibility,
In a task, the contact should be direct assigned. | priority | assigning responsible user should automatically assign contact the user s contact should automatically be assigned this information is stored in the user s profile in a matter the role should match the responsibility in a task the contact should be direct assigned | 1 |
55,559 | 3,073,781,017 | IssuesEvent | 2015-08-20 00:27:53 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Google MapsV2 cannot be screenshotted | bug duplicate imported Priority-Medium | _From [m...@thomaskeller.biz](https://code.google.com/u/115202190558223058672/) on May 23, 2013 04:25:04_
I'm using Robotium 4.1 that introduced GLSurfaceView snapshotting ( https://github.com/jayway/robotium/commit/fafc3db87193abbe03f94ba74f8d0ff4aeaa4c36 ). The problem is however that MapViews still popup as black areas inside screenshots, since these are not using a android.opengl.GLSurfaceView, but a private implementation that derives from the plain android.view.SurfaceView. Should snapshotting SurfaceViews work in general?
_Original issue: http://code.google.com/p/robotium/issues/detail?id=462_ | 1.0 | Google MapsV2 cannot be screenshotted - _From [m...@thomaskeller.biz](https://code.google.com/u/115202190558223058672/) on May 23, 2013 04:25:04_
I'm using Robotium 4.1 that introduced GLSurfaceView snapshotting ( https://github.com/jayway/robotium/commit/fafc3db87193abbe03f94ba74f8d0ff4aeaa4c36 ). The problem is however that MapViews still popup as black areas inside screenshots, since these are not using a android.opengl.GLSurfaceView, but a private implementation that derives from the plain android.view.SurfaceView. Should snapshotting SurfaceViews work in general?
_Original issue: http://code.google.com/p/robotium/issues/detail?id=462_ | priority | google cannot be screenshotted from on may i m using robotium that introduced glsurfaceview snapshotting the problem is however that mapviews still popup as black areas inside screenshots since these are not using a android opengl glsurfaceview but a private implementation that derives from the plain android view surfaceview should snapshotting surfaceviews work in general original issue | 1 |
823,412 | 31,019,151,278 | IssuesEvent | 2023-08-10 02:48:54 | markgravity/golang-ic | https://api.github.com/repos/markgravity/golang-ic | closed | [Integrate] As a user, I can sign out | type: feature priority: medium | ## Acceptance Criteria
- Implement a sign-out feature by using this API #12
- Show the success message when signing out successfully: #5
## Resource

| 1.0 | [Integrate] As a user, I can sign out - ## Acceptance Criteria
- Implement a sign-out feature by using this API #12
- Show the success message when signing out successfully: #5
## Resource

| priority | as a user i can sign out acceptance criteria implement a sign out feature by using this api show the success message when signing out successfully resource | 1 |
164,252 | 6,222,821,595 | IssuesEvent | 2017-07-10 10:07:45 | vmware/harbor | https://api.github.com/repos/vmware/harbor | closed | jobservice crash | kind/bug kind/customer-found priority/medium | I am using harbor 1.1.1, and since July 1, the jobservice failed to replicate the images.
I checked the jobservice.log file, and got the following errors:
```
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:88]: Job id: 18474, transition succeeded, current state: running
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:97]: Job id: 18474, next state from handler: _continue
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:109]: Job id: 18474, Continue to state: initialize
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:61]: Job id: 18474, transiting from State: running, to State: initialize
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:88]: Job id: 18455, transition succeeded, current state: running
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:97]: Job id: 18455, next state from handler: _continue
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:109]: Job id: 18455, Continue to state: initialize
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:61]: Job id: 18455, transiting from State: running, to State: initialize
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:88]: Job id: 18456, transition succeeded, current state: running
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:97]: Job id: 18456, next state from handler: _continue
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:109]: Job id: 18456, Continue to state: initialize
Jul 4 16:57:22 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:22Z [DEBUG] [statemachine.go:61]: Job id: 18456, transiting from State: running, to State: initialize
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:23Z [DEBUG] [statemachine.go:88]: Job id: 18456, transition succeeded, current state: initialize
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:23Z [DEBUG] [statemachine.go:118]: Job id: 18456, next state from handler: check
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:23Z [DEBUG] [statemachine.go:61]: Job id: 18456, transiting from State: initialize, to State: check
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:23Z [DEBUG] [statemachine.go:88]: Job id: 18455, transition succeeded, current state: initialize
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:23Z [DEBUG] [statemachine.go:118]: Job id: 18455, next state from handler: check
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: 2017-07-04T08:57:23Z [DEBUG] [statemachine.go:61]: Job id: 18455, transiting from State: initialize, to State: check
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x72955d]
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: goroutine 16 [running]:
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: panic(0x9c6d60, 0xc42000c060)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /usr/local/go/src/runtime/panic.go:500 +0x1a1
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/replication.(*Checker).enter(0xc420370060, 0xf2bca0, 0x414f50, 0xc420043a50, 0x30272089)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/replication/transfer.go:195 +0x19d
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/replication.(*Checker).Enter(0xc420370060, 0xc420362840, 0xa57f84, 0x5, 0xc420464658)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/replication/transfer.go:179 +0x2f
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*SM).EnterState(0xc420311c00, 0xa57f84, 0x5, 0x2, 0x2, 0x0, 0x0)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/statemachine.go:80 +0x321
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*SM).Start(0xc420311c00, 0xa5aa5d, 0x7)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/statemachine.go:117 +0x23a
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*Worker).handleRepJob(0xc4203337c0, 0x4818)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/workerpool.go:96 +0x206
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*Worker).Start.func1(0xc4203337c0)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/workerpool.go:63 +0x204
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: created by github.com/vmware/harbor/src/jobservice/job.(*Worker).Start
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/workerpool.go:71 +0x3f
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x72955d]
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: goroutine 50 [running]:
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: panic(0x9c6d60, 0xc42000c060)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /usr/local/go/src/runtime/panic.go:500 +0x1a1
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/replication.(*Checker).enter(0xc420342080, 0xf2bca0, 0x414f50, 0xc4203c7a50, 0x2603e8e7)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/replication/transfer.go:195 +0x19d
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/replication.(*Checker).Enter(0xc420342080, 0xc42033cc60, 0xa57f84, 0x5, 0xc42045caa8)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/replication/transfer.go:179 +0x2f
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*SM).EnterState(0xc420311c70, 0xa57f84, 0x5, 0x2, 0x2, 0x0, 0x0)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/statemachine.go:80 +0x321
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*SM).Start(0xc420311c70, 0xa5aa5d, 0x7)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/statemachine.go:117 +0x23a
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*Worker).handleRepJob(0xc420333840, 0x4817)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/workerpool.go:96 +0x206
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: github.com/vmware/harbor/src/jobservice/job.(*Worker).Start.func1(0xc420333840)
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/workerpool.go:63 +0x204
Jul 4 16:57:23 172.18.0.1 jobservice[10132]: created by github.com/vmware/harbor/src/jobservice/job.(*Worker).Start
Jul 4 16:57:23 172.18.0.1 jobservice[10132]:     /go/src/github.com/vmware/harbor/src/jobservice/job/workerpool.go:71 +0x3f
```
| 1.0 | jobservice crash - I am using harbor 1.1.1, and since July 1, the jobservice failed to replicate the images. I checked the jobservice.log file, and got the following errors: … | priority | … | 1 |
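Both stack traces in this record point at the same bug: `(*Checker).enter` at `transfer.go:195` dereferences a nil pointer inside a worker goroutine, and since the panic is never recovered, one bad job kills the entire jobservice process. The failure class, and the defensive alternative of failing only the single job, can be sketched as follows — in Python purely for illustration, with hypothetical names, not Harbor's actual code:

```python
# A dependency that was never initialized plays the role of the nil
# pointer in the Go trace above.

class Checker:
    def __init__(self, registry=None):
        # 'registry' stands in for the field that is nil in the crash.
        self.registry = registry

    def enter(self):
        # Guard the dependency instead of dereferencing it unconditionally.
        if self.registry is None:
            return "error: checker has no registry client"
        return "check passed"

def handle_job(checker):
    # A worker should fail one job, not take down the whole pool.
    try:
        return checker.enter()
    except Exception as exc:
        return "error: %s" % exc

print(handle_job(Checker()))          # error: checker has no registry client
print(handle_job(Checker(object()))) # check passed
```

In Go the equivalent fix would be a nil check (or a `recover` in the worker loop) so that a single malformed replication job is marked as failed rather than crashing every other job in flight.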
49,312 | 3,001,928,823 | IssuesEvent | 2015-07-24 14:30:02 | jayway/powermock | https://api.github.com/repos/jayway/powermock | closed | Mockito @InjectMocks annotations on superclasses of test class are ignored | bug imported Priority-Medium | _From [ewald.s...@gmail.com](https://code.google.com/u/100048596282756017649/) on August 19, 2011 21:38:07_
PowerMock does not support '@InjectMocks' annotations on superclasses of a test class. This feature has been improved in Mockito 1.9.0 and is particularly useful for testing applications that make extensive use of Dependency Injection.
Here is an example that demonstrates the problem (it throws a NullPointerException):
```java
import static org.mockito.Mockito.verify;

import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.powermock.modules.junit4.PowerMockRunner;

interface Something {
    void invoke();
}

class ObjectUnderTest {
    private final Something _something;

    ObjectUnderTest(final Something something) {
        _something = something;
    }

    void performOperation() {
        _something.invoke();
    }
}

@Ignore
public abstract class BaseTest {
    @Mock
    Something _something;

    @InjectMocks
    ObjectUnderTest _objectUnderTest;

    @RunWith(PowerMockRunner.class)
    public static class PerformOperationTest extends BaseTest {
        @Test
        public void performsTheOperation() {
            _objectUnderTest.performOperation();
            verify(_something).invoke();
        }
    }
}
```
**Attachment:** [powermock-superclass-injectmocks.patch](http://code.google.com/p/powermock/issues/detail?id=343)
_Original issue: http://code.google.com/p/powermock/issues/detail?id=343_ | 1.0 | Mockito @InjectMocks annotations on superclasses of test class are ignored - … | priority | … | 1 |
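The attached patch resolves this by scanning superclasses, not only the leaf test class, for annotated fields. That hierarchy walk can be sketched language-neutrally — here in Python, with a hypothetical marker standing in for the `@Mock`/`@InjectMocks` annotations (this is not PowerMock's implementation):

```python
def marked(value):
    value._inject = True          # stands in for a field-level annotation
    return value

def annotated_fields(cls):
    """Collect marked attributes from cls and every ancestor class."""
    found = {}
    for klass in cls.__mro__:     # leaf class first, then its bases
        for name, value in vars(klass).items():
            if getattr(value, "_inject", False) and name not in found:
                found[name] = value
    return found

class BaseTest:
    something = marked(lambda: "mock")   # declared on the superclass

class PerformOperationTest(BaseTest):
    pass                                 # declares nothing itself

# The field declared on BaseTest is still discovered for the subclass:
print(sorted(annotated_fields(PerformOperationTest)))  # ['something']
```

A runner that stops at the leaf class (the pre-patch behavior) would find nothing for `PerformOperationTest`, leaving `_objectUnderTest` null — which is exactly the NullPointerException the example above triggers.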
52,665 | 3,025,982,837 | IssuesEvent | 2015-08-03 12:34:19 | GLolol/PyLink | https://api.github.com/repos/GLolol/PyLink | opened | [discussion] Warn users when messaging a channel they're not in / a user without a shared channel | discussion enhancement priority:medium relay | Note: The previous behavior that did this was buggy (caused stray messages to be sent), and thus disabled in 3646930d34cfa18e5d2a6299537e627830105921.
#### Cases to check for
- [ ] User messages a `-n` channel that they're not in. This is particularly finicky: since PyLink only spawns clients for users on shared channels, the sender might not be represented yet on every network; they might have a relay client for one network and not another. For example:
- Network1, network2, and network3 are in a relay.
- All 3 networks share `#channel1`, which is `-n`, but only network2 and network3 share `#channel2`.
- Someone joins `#channel2` on network3 and speaks in `#channel1` (without joining). The message is forwarded to `#channel1` on network2 as `someone/network3`, but dropped entirely on network1 because there is no client there representing the user!
- So, what's the best way of handling this scenario? (**discuss**)
- Drop relay messages to `-n` channels entirely if the sender is not in the target channel, and warn the sender that relay will not receive its messages. **Pitfalls:** the message gets sent to all local users, but not any remote users, which might be confusing.
- Spawn a client for the sender, and quit them right afterwards. **Pitfalls:** wastes a perfectly good UID; more prone to desyncs.
- [ ] User forces a message to a `+n` channel. Should be handled like the above case. (is this even possible on most IRCds?)
- [ ] User messages a `-n` channel that they *are* in. Handle this normally.
- [ ] User messages a user that they're not in a common channel with. Warn the sender.
- [ ] User messages a user that they *are* in a common channel with. Handle this normally.
- [ ] User sends a message to `@#channel` or similar. We need to hack around this so it's treated as a channel and not a nick!
- **Investigate:** do IRCds allow using this syntax when the sender isn't in the target channel (even if it is `-n`)?
- What if the mode prefix isn't supported on a certain network? e.g. should `%#channel` be coerced to `@#channel` since it's the next highest prefix?
- [ ] **Make sure sending a message to a channel you're in never triggers a warning!** The old behavior really messed this up. Check the following cases:
- [ ] When a channel has relays, but all the linked networks are disconnected (no connected relays).
- [ ] When a channel is CREATEd, but no networks have linked to it yet.
| 1.0 | [discussion] Warn users when messaging a channel they're not in / a user without a shared channel - … | priority | … | 1 |
294,502 | 9,029,932,578 | IssuesEvent | 2019-02-08 01:07:30 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | Upgrade to latest Doxygen | priority: medium team: kitware type: feature request | The latest Doxygen (1.8.15) has a few bugfixes that end up being very nice (in particular, using declarations for class methods).
Homebrew already has 1.8.15, and in general seems like it will track the latest releases like it usually does. Ubuntu is stuck on 1.8.13 (even in > 18.04). We should provide a binary image for latest on Ubuntu, so that we can have nice things. | 1.0 | Upgrade to latest Doxygen - The latest Doxygen (1.8.15) has a few bugfixes that end up being very nice (in particular, using declarations for class methods).
Homebrew already has 1.8.15, and in general seems like it will track the latest releases like it usually does. Ubuntu is stuck on 1.8.13 (even in > 18.04). We should provide a binary image for latest on Ubuntu, so that we can have nice things. | priority | upgrade to latest doxygen the latest doxygen has a few bugfixes that end up being very nice in particular using declarations for class methods homebrew already has and in general seems like it will track the latest releases like it usually does ubuntu is stuck on even in we should provide a binary image for latest on ubuntu so that we can have nice things | 1 |
79,661 | 3,538,416,787 | IssuesEvent | 2016-01-18 09:48:48 | Metaswitch/sprout | https://api.github.com/repos/Metaswitch/sprout | closed | Sprout adds wrong header field parameter for orig-cdiv | bug cat:easy medium-priority | See 3GPP TS 29.229. (http://www.etsi.org/deliver/etsi_ts/124200_124299/124229/12.08.00_60/ts_124229v120800p.pdf page 230)
Before sprout invokes ASs in the orig-cdiv session case, it adds this information incorrectly to the P-Served-User header.
Currently, sprout does:
P-Served-User: <sip:user@domain>;sescase=orig-cdiv
sprout should do:
P-Served-User: <sip:user@domain>;orig-cdiv | 1.0 | Sprout adds wrong header field parameter for orig-cdiv - See 3GPP TS 29.229. (http://www.etsi.org/deliver/etsi_ts/124200_124299/124229/12.08.00_60/ts_124229v120800p.pdf page 230)
Before sprout invokes ASs in the orig-cdiv session case, it adds this information incorrectly to the P-Served-User header.
Currently, sprout does:
P-Served-User: <sip:user@domain>;sescase=orig-cdiv
sprout should do:
P-Served-User: <sip:user@domain>;orig-cdiv | priority | sprout adds wrong header field parameter for orig cdiv see ts page before sprout invokes ass in the orig cdiv session case it adds this information incorrectly to the p served user header currently sprout does p served user sescase orig cdiv sprout should do p served user orig cdiv | 1 |