| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
85,841 | 3,699,325,679 | IssuesEvent | 2016-02-28 22:05:32 | Chuppy21/projekktor-zwei | https://api.github.com/repos/Chuppy21/projekktor-zwei | closed | add "fast forward" and "backward" buttons | auto-migrated Priority-Medium Type-Enhancement | ```
... and implement a corresponding functionality of course.
```
Original issue reported on code.google.com by `frankygh...@googlemail.com` on 28 May 2010 at 11:02 | 1.0 | add "fast forward" and "backward" buttons - ```
... and implement a corresponding functionality of course.
```
Original issue reported on code.google.com by `frankygh...@googlemail.com` on 28 May 2010 at 11:02 | priority | add fast forward and backward buttons and implement a corresponding functionality of course original issue reported on code google com by frankygh googlemail com on may at | 1 |
31,004 | 2,730,831,472 | IssuesEvent | 2015-04-16 16:53:42 | chummer5a/chummer5a | https://api.github.com/repos/chummer5a/chummer5a | closed | Error in calculations if a piece of gear is allowed when availability is depending on the rating. | auto-migrated bug Priority-Medium | <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Tuesday Mar 17, 2015 at 08:56 GMT_
_Originally opened as https://github.com/chummer5a/chummer5/issues/14_
----
```
What steps will reproduce the problem?
1. Create a character.
2. Open tab [Streed Gear]
3. Open subtab [Gear]
4. Push button [Add Gear]
5. Search for 'Fake' and select 'Fake SIN[ID/Credstics]'
6. Push the rating up as far as you can.
What is the expected output? What do you see instead?
The expected output is raring 4.
The reached rating is 6.
What version of the product are you using? On what operating system?
Version 0.0.5.139
Please provide any additional information below.
SR5 94:
Keep in mind... The characters are restricted to a maximum Availability rating
of 12 and a device rating of 6.
SR5 443: Table IDENTIFICATION
Fake SIN (Rating 1-6) AVAIL: (Rating x 3)F
Rating 6 -> AVAIL: (6*3)F = 18F -> forbidden.
Raring 4 -> AVAIL: (4*3)F = 12F -> allowed.
Kind regards.
```
Original issue reported on code.google.com by `a.steenv...@vista-online.nl` on 29 Sep 2014 at 7:34
| 1.0 | Error in calculations if a piece of gear is allowed when availability is depending on the rating. - <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Tuesday Mar 17, 2015 at 08:56 GMT_
_Originally opened as https://github.com/chummer5a/chummer5/issues/14_
----
```
What steps will reproduce the problem?
1. Create a character.
2. Open tab [Streed Gear]
3. Open subtab [Gear]
4. Push button [Add Gear]
5. Search for 'Fake' and select 'Fake SIN[ID/Credstics]'
6. Push the rating up as far as you can.
What is the expected output? What do you see instead?
The expected output is raring 4.
The reached rating is 6.
What version of the product are you using? On what operating system?
Version 0.0.5.139
Please provide any additional information below.
SR5 94:
Keep in mind... The characters are restricted to a maximum Availability rating
of 12 and a device rating of 6.
SR5 443: Table IDENTIFICATION
Fake SIN (Rating 1-6) AVAIL: (Rating x 3)F
Rating 6 -> AVAIL: (6*3)F = 18F -> forbidden.
Raring 4 -> AVAIL: (4*3)F = 12F -> allowed.
Kind regards.
```
Original issue reported on code.google.com by `a.steenv...@vista-online.nl` on 29 Sep 2014 at 7:34
| priority | error in calculations if a piece of gear is allowed when availability is depending on the rating issue by tuesday mar at gmt originally opened as what steps will reproduce the problem create a character open tab open subtab push button search for fake and select fake sin push the rating up as far as you can what is the expected output what do you see instead the expected output is raring the reached rating is what version of the product are you using on what operating system version please provide any additional information below keep in mind the characters are restricted to a maximum availability rating of and a device rating of table identification fake sin rating avail rating x f rating avail f forbidden raring avail f allowed kind regards original issue reported on code google com by a steenv vista online nl on sep at | 1 |
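The availability rule quoted in the row above lends itself to a direct check. A minimal sketch in Python — the function name and structure are illustrative, not Chummer's actual code:

```python
def max_allowed_rating(avail_multiplier, max_availability, max_rating=6):
    """Highest gear rating whose computed availability stays within the cap."""
    allowed = 0
    for rating in range(1, max_rating + 1):
        if rating * avail_multiplier <= max_availability:
            allowed = rating
    return allowed

# Fake SIN: Avail = (Rating x 3)F, character availability cap 12
print(max_allowed_rating(3, 12))  # 4 — rating 6 would give Avail 18, forbidden
```

With the multiplier of 3 and cap of 12 from SR5, the maximum legal rating is 4, matching the expected output in the report.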
40,719 | 2,868,938,147 | IssuesEvent | 2015-06-05 22:04:27 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | listDir() can return file paths that are not within the given directory's path | bug Fixed Priority-Medium | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7346_
----
When you use listDir() to walk a directory, it returns paths that are the real paths of the contents, after symlink traversal. If the directory path itself uses a symlink, this means you can get paths that seem to not be within the directory.
For example:
listDir('/tmp/temp_dir1_pYa9UG/myapp/lib')
returns:
[/private/tmp/temp_dir1_pYa9UG/myapp/lib/src]
This is at least true on Mac. Not sure about other OS's. For M2, I'm going to make a narrowly targeted fix in the one place where this causes a bug (#7330), but we should do something directly in io.dart when we have a little more time.
I think the cleanest fix is to:
1. Have listDir() get the real path of the directory: new File(dir).fullPathSync();
2. Return the resulting paths relative to that.
| 1.0 | listDir() can return file paths that are not within the given directory's path - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7346_
----
When you use listDir() to walk a directory, it returns paths that are the real paths of the contents, after symlink traversal. If the directory path itself uses a symlink, this means you can get paths that seem to not be within the directory.
For example:
listDir('/tmp/temp_dir1_pYa9UG/myapp/lib')
returns:
[/private/tmp/temp_dir1_pYa9UG/myapp/lib/src]
This is at least true on Mac. Not sure about other OS's. For M2, I'm going to make a narrowly targeted fix in the one place where this causes a bug (#7330), but we should do something directly in io.dart when we have a little more time.
I think the cleanest fix is to:
1. Have listDir() get the real path of the directory: new File(dir).fullPathSync();
2. Return the resulting paths relative to that.
| priority | listdir can return file paths that are not within the given directory s path issue by originally opened as dart lang sdk when you use listdir to walk a directory it returns paths that are the real paths of the contents after symlink traversal if the directory path itself uses a symlink this means you can get paths that seem to not be within the directory for example listdir tmp temp myapp lib returns this is at least true on mac not sure about other os s for i m going to make a narrowly targeted fix in the one place where this causes a bug but we should do something directly in io dart when we have a little more time i think the cleanest fix is to have listdir get the real path of the directory new file dir fullpathsync return the resulting paths relative to that | 1 |
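The two-step fix proposed in the row above (get the directory's real path, then return results relative to it) can be sketched in Python; `list_dir` here is a hypothetical stand-in for pub's Dart implementation, not its actual code:

```python
import os

def list_dir(dir_path):
    """List dir_path, keeping results under the caller-supplied path even
    when dir_path itself is (or contains) a symlink."""
    real_dir = os.path.realpath(dir_path)  # step 1: real path of the directory
    entries = []
    for name in os.listdir(real_dir):
        real_child = os.path.realpath(os.path.join(real_dir, name))
        rel = os.path.relpath(real_child, real_dir)  # step 2: relative to the real dir
        entries.append(os.path.join(dir_path, rel))  # rebase onto the given path
    return entries
```

Called with `/tmp/temp_dir1_pYa9UG/myapp/lib`, this returns children under that same prefix rather than under the symlink-resolved `/private/tmp/...` path.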
661,870 | 22,092,957,865 | IssuesEvent | 2022-06-01 07:44:00 | Adyen/adyen-magento2 | https://api.github.com/repos/Adyen/adyen-magento2 | closed | Credit card payment error TypeError: paymentMethodsResponse.paymentMethodsExtraDetails.card is undefined | Bug report Priority: medium Confirmed | **Describe the bug**
Credit card payment method is not usable
**To Reproduce**
Steps to reproduce the behavior:
1 Add a product to the card
2. go to checkout stept 2
3. open dev console
4. see error TypeError: paymentMethodsResponse.paymentMethodsExtraDetails.card is undefined
**Magento version**
Adobe Commerce 2.4.4
**Plugin version**
adyen/module-payment 8.2.3
**Screenshots**

| 1.0 | Credit card payment error TypeError: paymentMethodsResponse.paymentMethodsExtraDetails.card is undefined - **Describe the bug**
Credit card payment method is not usable
**To Reproduce**
Steps to reproduce the behavior:
1 Add a product to the card
2. go to checkout stept 2
3. open dev console
4. see error TypeError: paymentMethodsResponse.paymentMethodsExtraDetails.card is undefined
**Magento version**
Adobe Commerce 2.4.4
**Plugin version**
adyen/module-payment 8.2.3
**Screenshots**

| priority | credit card payment error typeerror paymentmethodsresponse paymentmethodsextradetails card is undefined describe the bug credit card payment method is not usable to reproduce steps to reproduce the behavior add a product to the card go to checkout stept open dev console see error typeerror paymentmethodsresponse paymentmethodsextradetails card is undefined magento version adobe commerce plugin version adyen module payment screenshots | 1 |
241,578 | 7,817,443,759 | IssuesEvent | 2018-06-13 09:03:38 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | mainField will lost content type builder model update | Good for New Contributors priority: medium status: confirmed type: bug 🐛 | **Informations**
- **Node.js version**: 9.10.1
- **npm version**:5.6.0
- **Strapi version**: 3.0.0-alpha.12.2
- **Database**: mongodb 3.6.4
- **Operating system**: macOS
**What is the current behavior?**
mainField in xxx.settings.json will lost if using admin panel to modify the same model
**Steps to reproduce the problem**
1. manually add 'mainField' in models/Profile.settings.json([documentation](https://strapi.io/documentation/guides/models.html#model-information)), since there is no entry to modify on admin panel.

2. modify field 'phoneNumber' in model 'Profile' use admin panel. For example, make 'phoneNumber' unique.

3. Result is: 'phoneNumber' saved, but 'mainField' lost.

**What is the expected behavior?**
new xxx.settings.json should copy from old xxx.settings.json, and merge new changes.
<!-- ⚠️ Make sure to browse the opened and closed issues before submit your issue. -->
| 1.0 | mainField will lost content type builder model update - **Informations**
- **Node.js version**: 9.10.1
- **npm version**:5.6.0
- **Strapi version**: 3.0.0-alpha.12.2
- **Database**: mongodb 3.6.4
- **Operating system**: macOS
**What is the current behavior?**
mainField in xxx.settings.json will lost if using admin panel to modify the same model
**Steps to reproduce the problem**
1. manually add 'mainField' in models/Profile.settings.json([documentation](https://strapi.io/documentation/guides/models.html#model-information)), since there is no entry to modify on admin panel.

2. modify field 'phoneNumber' in model 'Profile' use admin panel. For example, make 'phoneNumber' unique.

3. Result is: 'phoneNumber' saved, but 'mainField' lost.

**What is the expected behavior?**
new xxx.settings.json should copy from old xxx.settings.json, and merge new changes.
<!-- ⚠️ Make sure to browse the opened and closed issues before submit your issue. -->
| priority | mainfield will lost content type builder model update informations node js version npm version strapi version alpha database mongodb operating system macos what is the current behavior mainfield in xxx settings json will lost if using admin panel to modify the same model steps to reproduce the problem manually add mainfield in models profile settings json since there is no entry to modify on admin panel modify field phonenumber in model profile use admin panel for example make phonenumber unique result is phonenumber saved but mainfield lost what is the expected behavior new xxx settings json should copy from old xxx settings json and merge new changes | 1 |
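The expected behavior described in the row above — copy the old settings file and merge the new changes on top, so keys the admin panel does not know about survive a save — amounts to a recursive merge. A minimal sketch (the key names mirror the report; Strapi's real settings schema differs):

```python
def merge_settings(old, changes):
    """Start from the existing settings so unknown keys (like mainField)
    survive, then apply the edits, recursing into nested dicts."""
    merged = dict(old)
    for key, value in changes.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_settings(merged[key], value)
        else:
            merged[key] = value
    return merged

old = {"info": {"mainField": "phoneNumber"},
       "attributes": {"phoneNumber": {"type": "string"}}}
changes = {"attributes": {"phoneNumber": {"unique": True}}}
merged = merge_settings(old, changes)
# mainField survives the save instead of being dropped
```

Writing `merged` back to `xxx.settings.json` would keep the manually added `mainField` while still persisting the `unique` edit.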
625,229 | 19,722,821,342 | IssuesEvent | 2022-01-13 16:54:53 | carbon-design-system/carbon-for-ibm-dotcom | https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom | closed | [Universal banner] React Wrapper: Prod QA testing | Feature request package: react priority: medium QA adopter: Innovation Team | #### User Story
> As a `[user role below]`:
developer using the Carbon for IBM.com `Universal banner`
> I need to:
have a version of the component that has been tested for accessibility compliance as well as on multiple browsers and platforms
> so that I can:
be confident that my ibm.com web site users will have a good experience
#### Additional information
- [Browser Stack link](https://ibm.ent.box.com/notes/578734426612)
- [Browser Standard](https://w3.ibm.com/standards/web/browser/)
- Sanity test of tier 1 mobile browsers (desktop is covered in e2e tests)
- Sanity test of Accessibility (Voiceover)
- [Accessibility testing guidance](https://pages.github.ibm.com/IBMa/able/Test/verify/)
- [Accessibility Checklist](https://www.ibm.com/able/guidelines/ci162/accessibility_checklist.html)
- [Creating a QA bug](https://ibm.ent.box.com/notes/603242247385)
- **See the Epic (https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/6436) for the Design and Functional specs information**
- Web Components Dev issue (https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/6815)
- Once development is finished the updated code is available in the [**Web Components Canary Environment**](https://ibmdotcom-web-components-canary.mybluemix.net/?path=/story/overview-getting-started--page) for testing.
- [**Web Components canary storybook**](https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/web-components)
- [**React canary storybook**](https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/react)
- [**React wrapper storybook**](https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/web-components-react)
#### Acceptance criteria
- [ ] Accessibility testing is complete
- [ ] All manual testing is complete
- [ ] Defects are recorded | 1.0 | [Universal banner] React Wrapper: Prod QA testing - #### User Story
> As a `[user role below]`:
developer using the Carbon for IBM.com `Universal banner`
> I need to:
have a version of the component that has been tested for accessibility compliance as well as on multiple browsers and platforms
> so that I can:
be confident that my ibm.com web site users will have a good experience
#### Additional information
- [Browser Stack link](https://ibm.ent.box.com/notes/578734426612)
- [Browser Standard](https://w3.ibm.com/standards/web/browser/)
- Sanity test of tier 1 mobile browsers (desktop is covered in e2e tests)
- Sanity test of Accessibility (Voiceover)
- [Accessibility testing guidance](https://pages.github.ibm.com/IBMa/able/Test/verify/)
- [Accessibility Checklist](https://www.ibm.com/able/guidelines/ci162/accessibility_checklist.html)
- [Creating a QA bug](https://ibm.ent.box.com/notes/603242247385)
- **See the Epic (https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/6436) for the Design and Functional specs information**
- Web Components Dev issue (https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/6815)
- Once development is finished the updated code is available in the [**Web Components Canary Environment**](https://ibmdotcom-web-components-canary.mybluemix.net/?path=/story/overview-getting-started--page) for testing.
- [**Web Components canary storybook**](https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/web-components)
- [**React canary storybook**](https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/react)
- [**React wrapper storybook**](https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/web-components-react)
#### Acceptance criteria
- [ ] Accessibility testing is complete
- [ ] All manual testing is complete
- [ ] Defects are recorded | priority | react wrapper prod qa testing user story as a developer using the carbon for ibm com universal banner i need to have a version of the component that has been tested for accessibility compliance as well as on multiple browsers and platforms so that i can be confident that my ibm com web site users will have a good experience additional information sanity test of tier mobile browsers desktop is covered in tests sanity test of accessibility voiceover see the epic for the design and functional specs information web components dev issue once development is finished the updated code is available in the for testing acceptance criteria accessibility testing is complete all manual testing is complete defects are recorded | 1 |
55,549 | 3,073,655,152 | IssuesEvent | 2015-08-19 23:24:34 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | ability to wait for activity/view to fully load | bug imported invalid Priority-Medium | _From [ram...@gmail.com](https://code.google.com/u/113328738738961004047/) on May 16, 2013 08:05:55_
Is that possible to have some kind of waiter for view to be fully loaded ? i.e.
solo.clickOnButton(buttonName);
solo.waitForLoad();
// do tests.
What we can see is that on some devices that buttons are not found but they clearly there which tends me to believe there can be some timing issue ?
Thanks.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=457_ | 1.0 | ability to wait for activity/view to fully load - _From [ram...@gmail.com](https://code.google.com/u/113328738738961004047/) on May 16, 2013 08:05:55_
Is that possible to have some kind of waiter for view to be fully loaded ? i.e.
solo.clickOnButton(buttonName);
solo.waitForLoad();
// do tests.
What we can see is that on some devices that buttons are not found but they clearly there which tends me to believe there can be some timing issue ?
Thanks.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=457_ | priority | ability to wait for activity view to fully load from on may is that possible to have some kind of waiter for view to be fully loaded i e solo clickonbutton buttonname solo waitforload do tests what we can see is that on some devices that buttons are not found but they clearly there which tends me to believe there can be some timing issue thanks original issue | 1 |
218,447 | 7,331,445,291 | IssuesEvent | 2018-03-05 13:33:19 | SmartlyDressedGames/Unturned-4.x-Community | https://api.github.com/repos/SmartlyDressedGames/Unturned-4.x-Community | closed | Ability stats editor | Priority: Medium Status: Complete Type: Optimization | - [x] Default value editor more concise
- [x] Modifier value editor show +/- and color
- [x] Unit test UAbilityStatSet | 1.0 | Ability stats editor - - [x] Default value editor more concise
- [x] Modifier value editor show +/- and color
- [x] Unit test UAbilityStatSet | priority | ability stats editor default value editor more concise modifier value editor show and color unit test uabilitystatset | 1 |
364,642 | 10,771,763,008 | IssuesEvent | 2019-11-02 10:05:42 | bounswe/bounswe2019group4 | https://api.github.com/repos/bounswe/bounswe2019group4 | closed | Add Prediction Feature Backend | Back-End Priority: Medium Type: Development | We need to add a rate showed on user's profile page, calculated by their past predictions. These predictions should be also showed in tradinq equipment page. | 1.0 | Add Prediction Feature Backend - We need to add a rate showed on user's profile page, calculated by their past predictions. These predictions should be also showed in tradinq equipment page. | priority | add prediction feature backend we need to add a rate showed on user s profile page calculated by their past predictions these predictions should be also showed in tradinq equipment page | 1 |
432,397 | 12,492,208,687 | IssuesEvent | 2020-06-01 06:36:02 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1285] Web: action counting for graphs | Category: Web Priority: Medium | I see that graphs filters had changed. I can't see particular peopla statistic for example.
And. check action counting please.
I was digging, placing, pickuping... and nothing on stats.

| 1.0 | [0.9.0 staging-1285] Web: action counting for graphs - I see that graphs filters had changed. I can't see particular peopla statistic for example.
And. check action counting please.
I was digging, placing, pickuping... and nothing on stats.

| priority | web action counting for graphs i see that graphs filters had changed i can t see particular peopla statistic for example and check action counting please i was digging placing pickuping and nothing on stats | 1 |
57,526 | 3,082,702,914 | IssuesEvent | 2015-08-24 00:18:16 | magro/memcached-session-manager | https://api.github.com/repos/magro/memcached-session-manager | closed | Switch to new CouchbaseClient API | enhancement imported Milestone-1.6.4 Priority-Medium | _From [rka...@gmail.com](https://code.google.com/u/116755208212771764698/) on March 05, 2012 20:00:26_
Cpuchbase Server 1.8 (formerly known as Membase) is using a slightly different API and it's no longer part of the spymemcached API.
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=126_ | 1.0 | Switch to new CouchbaseClient API - _From [rka...@gmail.com](https://code.google.com/u/116755208212771764698/) on March 05, 2012 20:00:26_
Cpuchbase Server 1.8 (formerly known as Membase) is using a slightly different API and it's no longer part of the spymemcached API.
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=126_ | priority | switch to new couchbaseclient api from on march cpuchbase server formerly known as membase is using a slightly different api and it s no longer part of the spymemcached api original issue | 1 |
109,025 | 4,366,534,043 | IssuesEvent | 2016-08-03 14:36:57 | LearningLocker/learninglocker | https://api.github.com/repos/LearningLocker/learninglocker | closed | Articulate Storyline 2 course doesn't resume | priority:medium status:unconfirmed type:bug | **Version**
1.13.3
**Steps to reproduce the bug**
1. Create working launch link to course
2. Launch course and play through a bit
3. Close course
4. Re-launch course with same launch link
5. The course does not prompt for resume
**Expected behaviour**
Course should prompt for resume when we refresh/reload with same activity and same actor
**Actual behaviour**
Course start over again
**Additional information**
OS: Window 10 x64 with WAMP + Mongodb
Browser: Version 51.0.2704.106 m (64-bit)
Tested the same course with ScormCloud and it is working fine. But couldn't get it work with LearningLocker.
---
The difference I notice is it return {} instead of { data:"xxxx" } in the http://localhost:8081/learninglocker/public/data/xAPI/activities/state?method=GET
I am not sure if the parameter I pass to the server is correct or not.
> story.html?endpoint=http://localhost:8081/learninglocker/public/data/xAPI/&auth=Basic%20M2FjNDAwNWZhNDY4YWQ4M2Y2ZjUxMTZjN2UwMDdmMDIyMjJhZjdkNzozOTE1MzI5ODcxN2QxYjFmZTI3MzY5OGI2NWJjNzBjNzdlODUwNTFj&actor={"mbox":"mailto:xxx@xxx.com",%20"name":"Jason"}®istration=2981c910-6445-11e4-9803-0800200c9a66&activity_id=www.example.com/my-activity
Also I am not sure where to get/generate for the registration and activity_id paramenter for, I just copy from the tutorial website and use it. Is that the reason it doesn't work?
---
I suspect the `http://localhost:8081/learninglocker/public/data/xAPI/activities/state?method=GET` data return {} is due to the data in content & registration field in `documentapi ` table is empty.
So that's why it couldn't resume.
```
{
"_id" : ObjectId("57991e7d7f7759ac2600002c"),
"lrs" : ObjectId("57991beb7f7759b02100002d"),
"lrs_id" : ObjectId("57991beb7f7759b02100002d"),
"documentType" : "state",
"identId" : "resume",
"activityId" : "http://5a8SiBHMf0l_course_id",
"agent" : {
"mbox" : "mailto:xxx@xxx.com",
"name" : "Jason"
},
"registration" : null,
"updated_at" : ISODate("2016-07-27T21:31:27.000Z"),
"sha" : "AA7B6DA600F7E471DEEE83DCB06F923C6353FF65",
"content" : null,
"contentType" : "application/json",
"created_at" : ISODate("2016-07-27T20:50:05.000Z")
}
``` | 1.0 | Articulate Storyline 2 course doesn't resume - **Version**
1.13.3
**Steps to reproduce the bug**
1. Create working launch link to course
2. Launch course and play through a bit
3. Close course
4. Re-launch course with same launch link
5. The course does not prompt for resume
**Expected behaviour**
Course should prompt for resume when we refresh/reload with same activity and same actor
**Actual behaviour**
Course start over again
**Additional information**
OS: Window 10 x64 with WAMP + Mongodb
Browser: Version 51.0.2704.106 m (64-bit)
Tested the same course with ScormCloud and it is working fine. But couldn't get it work with LearningLocker.
---
The difference I notice is it return {} instead of { data:"xxxx" } in the http://localhost:8081/learninglocker/public/data/xAPI/activities/state?method=GET
I am not sure if the parameter I pass to the server is correct or not.
> story.html?endpoint=http://localhost:8081/learninglocker/public/data/xAPI/&auth=Basic%20M2FjNDAwNWZhNDY4YWQ4M2Y2ZjUxMTZjN2UwMDdmMDIyMjJhZjdkNzozOTE1MzI5ODcxN2QxYjFmZTI3MzY5OGI2NWJjNzBjNzdlODUwNTFj&actor={"mbox":"mailto:xxx@xxx.com",%20"name":"Jason"}®istration=2981c910-6445-11e4-9803-0800200c9a66&activity_id=www.example.com/my-activity
Also I am not sure where to get/generate for the registration and activity_id paramenter for, I just copy from the tutorial website and use it. Is that the reason it doesn't work?
---
I suspect the `http://localhost:8081/learninglocker/public/data/xAPI/activities/state?method=GET` data return {} is due to the data in content & registration field in `documentapi ` table is empty.
So that's why it couldn't resume.
```
{
"_id" : ObjectId("57991e7d7f7759ac2600002c"),
"lrs" : ObjectId("57991beb7f7759b02100002d"),
"lrs_id" : ObjectId("57991beb7f7759b02100002d"),
"documentType" : "state",
"identId" : "resume",
"activityId" : "http://5a8SiBHMf0l_course_id",
"agent" : {
"mbox" : "mailto:xxx@xxx.com",
"name" : "Jason"
},
"registration" : null,
"updated_at" : ISODate("2016-07-27T21:31:27.000Z"),
"sha" : "AA7B6DA600F7E471DEEE83DCB06F923C6353FF65",
"content" : null,
"contentType" : "application/json",
"created_at" : ISODate("2016-07-27T20:50:05.000Z")
}
``` | priority | articulate storyline course doesn t resume version steps to reproduce the bug create working launch link to course launch course and play through a bit close course re launch course with same launch link the course does not prompt for resume expected behaviour course should prompt for resume when we refresh reload with same activity and same actor actual behaviour course start over again additional information os window with wamp mongodb browser version m bit tested the same course with scormcloud and it is working fine but couldn t get it work with learninglocker the difference i notice is it return instead of data xxxx in the i am not sure if the parameter i pass to the server is correct or not story html endpoint also i am not sure where to get generate for the registration and activity id paramenter for i just copy from the tutorial website and use it is that the reason it doesn t work i suspect the data return is due to the data in content registration field in documentapi table is empty so that s why it couldn t resume id objectid lrs objectid lrs id objectid documenttype state identid resume activityid agent mbox mailto xxx xxx com name jason registration null updated at isodate sha content null contenttype application json created at isodate | 1 |
40,499 | 2,868,923,625 | IssuesEvent | 2015-06-05 21:59:13 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Pub should gracefully handle a Git dependency's pubspec changing name or going away | bug Fixed Priority-Medium | <a href="https://github.com/nex3"><img src="https://avatars.githubusercontent.com/u/188?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [nex3](https://github.com/nex3)**
_Originally opened as dart-lang/sdk#5241_
----
I'm not sure what it does now, but at the very least there should be tests for this case. | 1.0 | Pub should gracefully handle a Git dependency's pubspec changing name or going away - <a href="https://github.com/nex3"><img src="https://avatars.githubusercontent.com/u/188?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [nex3](https://github.com/nex3)**
_Originally opened as dart-lang/sdk#5241_
----
I'm not sure what it does now, but at the very least there should be tests for this case. | priority | pub should gracefully handle a git dependency s pubspec changing name or going away issue by originally opened as dart lang sdk i m not sure what it does now but at the very least there should be tests for this case | 1 |
830,807 | 32,024,819,551 | IssuesEvent | 2023-09-22 08:05:39 | ImTheCactus/Crow-Get-It-Game | https://api.github.com/repos/ImTheCactus/Crow-Get-It-Game | closed | Oak1 LODs aren't functioning | Priority: Medium Status: In progress Bug | Describe the bug: For the Oak1 tree model, the LOD won't accept a renderer from the model like the other models and LODs have.
Version: Crow Get It 3.5 version 0.2.0 *(To be updated on)
Steps to reproduce the behaviour: (In game) Walk far away from the oak/birch starting forest, for instance to the farm area, and you will see most of the other environmental props will disappear but Oak1 tree models will not.
Expected behaviour: After a certain distance, the Oak1 tree models should disappear for an increase of performance.
Desktop system details: OS: Windows10 | 1.0 | Oak1 LODs aren't functioning - Describe the bug: For the Oak1 tree model, the LOD won't accept a renderer from the model like the other models and LODs have.
Version: Crow Get It 3.5 version 0.2.0 *(To be updated on)
Steps to reproduce the behaviour: (In game) Walk far away from the oak/birch starting forest, for instance to the farm area, and you will see most of the other environmental props will disappear but Oak1 tree models will not.
Expected behaviour: After a certain distance, the Oak1 tree models should disappear for an increase of performance.
Desktop system details: OS: Windows10 | priority | lods aren t functioning describe the bug for the tree model the lod won t accept a renderer from the model like the other models and lods have version crow get it version to be updated on steps to reproduce the behaviour in game walk far away from the oak birch starting forest for instance to the farm area and you will see most of the other environmental props will disappear but tree models will not expected behaviour after a certain distance the tree models should disappear for an increase of performance desktop system details os | 1 |
279,645 | 8,671,455,373 | IssuesEvent | 2018-11-29 19:12:45 | SETI/pds-opus | https://api.github.com/repos/SETI/pds-opus | closed | Social sharing from detail page picks up wrong image | A-Bug B-OPUS Django Effort 2 Medium Priority TBD | Originally reported by: **lisa ballard (Bitbucket: [basilleaf](https://bitbucket.org/basilleaf), GitHub: [basilleaf](https://github.com/basilleaf))**
---
sharing an image to Pinterest from detail page doesn't work, pinterest is picking up entire gallery, probably similar for facebook etc.
---
- Bitbucket: https://bitbucket.org/ringsnode/opus2/issue/116
| 1.0 | Social sharing from detail page picks up wrong image - Originally reported by: **lisa ballard (Bitbucket: [basilleaf](https://bitbucket.org/basilleaf), GitHub: [basilleaf](https://github.com/basilleaf))**
---
sharing an image to Pinterest from detail page doesn't work, pinterest is picking up entire gallery, probably similar for facebook etc.
---
- Bitbucket: https://bitbucket.org/ringsnode/opus2/issue/116
| priority | social sharing from detail page picks up wrong image originally reported by lisa ballard bitbucket github sharing an image to pinterest from detail page doesn t work pinterest is picking up entire gallery probably similar for facebook etc bitbucket | 1 |
147,596 | 5,642,456,949 | IssuesEvent | 2017-04-06 21:10:10 | driftyco/ionic-app-scripts | https://api.github.com/repos/driftyco/ionic-app-scripts | closed | `ionic build` always exits with `0` | priority:medium | Currently, all tslint errors are treated as (kind of) warnings. I reach this conclusion by the fact that the script exits with `0` despite all those tslint errors on screen.
This way, we can't catch these in CI, because the exit code is `0` and there's almost no indication of the error. We have to run a regular-expression grep over the complete output to determine that.
There should be an option/setting to make the script return a non-zero exit code instead.
P.S. I think this applies beyond tslint. Other compilers have command-line options to treat "warnings" as "errors".
This way, we can't catch these in CI, because the exit code is `0` and there's almost no indication of the error. We have to run a regular-expression grep over the complete output to determine that.
There should be an option/setting to make the script return a non-zero exit code instead.
P.S. I think this applies beyond tslint. Other compilers have command-line options to treat "warnings" as "errors". | priority | ionic build always exits with currently all tslint errors are treated as kind of warnings i reach this conclusion by the fact that the script exits with despite all those tslint errors on screen this way we can t catch these in ci because of the exit code is and there s nearly no indication of such error we have to do a regular express grep on the complete output to determine that there should be an option setting to make the script return with not zero instead p s i think this applies beyond tslint other compilers has command line options to treat warnings as error | 1
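The workaround the reporter describes — grepping the captured build output and failing the CI job by hand — can be sketched as a tiny wrapper. The `ERROR:` line pattern below is an assumption for illustration, not ionic's actual output format:

```python
import re

# Hypothetical pattern for a tslint error line in the captured output.
TSLINT_ERROR = re.compile(r"^\s*ERROR:", re.MULTILINE)

def exit_code_for(build_output: str) -> int:
    """Return 1 when the captured build output contains lint errors,
    0 otherwise, so a CI job can fail even though the build itself
    exited with 0."""
    return 1 if TSLINT_ERROR.search(build_output) else 0
```

A CI step would run the build, capture its combined output, and call `sys.exit(exit_code_for(output))` to get the non-zero exit the reporter is asking for.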
452,976 | 13,062,641,871 | IssuesEvent | 2020-07-30 15:26:15 | datavisyn/tdp_core | https://api.github.com/repos/datavisyn/tdp_core | closed | RankingView: Covered tooltip of overview mode warning | priority: medium type: bug | * Release number or git hash: v9.1.0
* Web browser version and OS: Chrome 84
* Environment (local or deployed): both
### Steps to reproduce
1. Open https://ordino-daily.caleydoapp.org/
1. Open list of all genes
2. Activate Overview mode
### Observed behavior
The warning explanation is covered by the Tourdino icon as well as the button tooltip.

### Expected behavior
The tooltip should not be covered.
| 1.0 | RankingView: Covered tooltip of overview mode warning - * Release number or git hash: v9.1.0
* Web browser version and OS: Chrome 84
* Environment (local or deployed): both
### Steps to reproduce
1. Open https://ordino-daily.caleydoapp.org/
1. Open list of all genes
2. Activate Overview mode
### Observed behavior
The warning explanation is covered by the Tourdino icon as well as the button tooltip.

### Expected behavior
The tooltip should not be covered.
| priority | rankingview covered tooltip of overview mode warning release number or git hash web browser version and os chrome environment local or deployed both steps to reproduce open open list of all genes activate overview mode observed behavior the warning explanation is covered by the tourdino icon as well as the button tool tip expected behavior the tooltip should not be covered | 1 |
317,862 | 9,670,100,166 | IssuesEvent | 2019-05-21 19:02:04 | etternagame/etterna | https://api.github.com/repos/etternagame/etterna | closed | Osu beatmaps with a [Colours] section do not load | Priority: Medium Type: Bug | **Describe the bug**
Etterna does not load Osu beatmaps that contain a [Colours] section.
Manually deleting this section in each .osu file allows the beatmap to load ok.
**To Reproduce**
Steps to reproduce the behavior:
1. Attempt to load an Osu beatmap that contains a [Colours] section, e.g. https://osu.ppy.sh/beatmapsets/127305
2. The chart doesn't show up in-game.
**Expected behavior**
The chart loads and is playable in-game.
**Desktop (please complete the following information):**
- OS: Windows 10 x64
- Version 0.65.1 | 1.0 | Osu beatmaps with a [Colours] section do not load - **Describe the bug**
Etterna does not load Osu beatmaps that contain a [Colours] section.
Manually deleting this section in each .osu file allows the beatmap to load ok.
**To Reproduce**
Steps to reproduce the behavior:
1. Attempt to load an Osu beatmap that contains a [Colours] section, e.g. https://osu.ppy.sh/beatmapsets/127305
2. The chart doesn't show up in-game.
**Expected behavior**
The chart loads and is playable in-game.
**Desktop (please complete the following information):**
- OS: Windows 10 x64
- Version 0.65.1 | priority | osu beatmaps with a section do not load describe the bug etterna does not load osu beatmaps that contain a section manually deleting this section in each osu file allows the beatmap to load ok to reproduce steps to reproduce the behavior attempt to load an osu beatmap that contains a section e g the chart doesn t show up in game expected behavior the chart loads and is playable in game desktop please complete the following information os windows version | 1 |
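The report implies the fix: the .osu loader should carry sections it does not use, such as `[Colours]`, instead of rejecting the whole file. A minimal sketch of such a tolerant section splitter — not Etterna's actual parser — could look like this:

```python
def split_osu_sections(text: str) -> dict:
    """Split an .osu file into {section name: lines}, keeping every
    [Name] block it finds so unknown sections like [Colours] are
    simply carried along instead of causing a load failure."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]          # "[Colours]" -> "Colours"
            sections.setdefault(current, [])
        elif current is not None and line:
            sections[current].append(line)
    return sections
```

A loader built on this can then read only the sections it understands and ignore the rest, which matches the observation that deleting `[Colours]` by hand makes the chart load.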
256,186 | 8,127,027,629 | IssuesEvent | 2018-08-17 06:14:13 | codephil-columbia/typephil | https://api.github.com/repos/codephil-columbia/typephil | closed | No start next lesson option in learn | High Priority Medium Priority | Steps to reproduce: Click on next available lesson that you haven't started yet. Just add button and sangjun can do ui styling.

| 2.0 | No start next lesson option in learn - Steps to reproduce: Click on next available lesson that you haven't started yet. Just add button and sangjun can do ui styling.

| priority | no start next lesson option in learn steps to reproduce click on next available lesson that you haven t started yet just add button and sangjun can do ui styling | 1 |
343,185 | 10,326,029,733 | IssuesEvent | 2019-09-01 22:38:44 | ESAPI/esapi-java-legacy | https://api.github.com/repos/ESAPI/esapi-java-legacy | closed | exception is java.lang.NoClassDefFoundError: org.owasp.esapi.codecs.Codec | Priority-Medium bug imported | _From [alexj...@gmail.com](https://code.google.com/u/117724374125274417382/) on June 15, 2011 05:02:44_
Hi All,
My web project contains esapi-2.0GA.jar inside WEB-INF/lib folder. but still i am getting the below error. My application is running on websphere 6.1. Could anyone help me to resolve this issue.
exception is java.lang.NoClassDefFoundError: org.owasp.esapi.codecs.Codec
_Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=227_
| 1.0 | exception is java.lang.NoClassDefFoundError: org.owasp.esapi.codecs.Codec - _From [alexj...@gmail.com](https://code.google.com/u/117724374125274417382/) on June 15, 2011 05:02:44_
Hi All,
My web project contains esapi-2.0GA.jar inside WEB-INF/lib folder. but still i am getting the below error. My application is running on websphere 6.1. Could anyone help me to resolve this issue.
exception is java.lang.NoClassDefFoundError: org.owasp.esapi.codecs.Codec
_Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=227_
| priority | exception is java lang noclassdeffounderror org owasp esapi codecs codec from on june hi all my web project contains esapi jar inside web inf lib folder but still i am getting the below error my application is running on websphere could anyone help me to resolve this issue exception is java lang noclassdeffounderror org owasp esapi codecs codec original issue | 1 |
680,554 | 23,276,406,722 | IssuesEvent | 2022-08-05 07:39:50 | canonical/maas-ui | https://api.github.com/repos/canonical/maas-ui | closed | IP Address tooltip on Machines page blocks access to everything underneath and doesn't disappear until mouse-off | Priority: Medium Review: UX needed | Bug originally filed by bladernr at https://bugs.launchpad.net/bugs/1980846
When moving the mouse pointer down a list of machines, every time the pointer moves over the IP listed under a machine name, a tooltip opens that lists all the IPs associated with the machine.
This tooltip opens overtop everything underneath and:
1: opens immediately on mouseover with no delay to account for people moving the pointer down the list
2: does not close automatically *until the mouse is moved off the tooltip which disrupts the flow of movement
3: Depending on the number of IPs assigned can be quite long and can block access to multiple machine hyperlinks that are trapped underneath.
See the attached screenshot.
To resolve this, the tooltip should either
1: open to the side, not directly below
2: open after a short delay (maybe 250 or 500 ms?) to not block someone who is just moving the mouse down the list. (*IOW the behaviour should be on hover, not on mouseover). | 1.0 | IP Address tooltip on Machines page blocks access to everything underneath and doesnt disappear until mouse-off - Bug originally filed by bladernr at https://bugs.launchpad.net/bugs/1980846
When moving the mouse pointer down a list of machines, every time the pointer moves over the IP listed under a machine name, a tooltip opens that lists all the IPs associated with the machine.
This tooltip opens overtop everything underneath and:
1: opens immediately on mouseover with no delay to account for people moving the pointer down the list
2: does not close automatically *until the mouse is moved off the tooltip which disrupts the flow of movement
3: Depending on the number of IPs assigned can be quite long and can block access to multiple machine hyperlinks that are trapped underneath.
See the attached screenshot.
To resolve this, the tooltip should either
1: open to the side, not directly below
2: open after a short delay (maybe 250 or 500 ms?) to not block someone who is just moving the mouse down the list. (*IOW the behaviour should be on hover, not on mouseover). | priority | ip address tooltip on machines page blocks access to everything underneath and doesnt disappear until mouse off bug originally filed by bladernr at when moving the mouse pointer down a list of machines every time the pointer moves over the ip listed under a machine name a tooltip opens that lists all the ips associated with the machine this tooltip opens overtop everything underneath and opens immediately on mouseover with no delay to account for people moving the pointer down the list does not close automatically until the mouse is moved off the tooltip which disrupts the flow of movement depending on the number of ips assigned can be quite long and can block access to multiple machine hyperlinks that are trapped underneath see the attached screenshot to resolve this the tooltip should either open to the side not directly below open after a short delay maybe or ms to not block someone who is just moving the mouse down the list iow the behaviour should be on hover not on mouseover | 1 |
441,169 | 12,708,953,926 | IssuesEvent | 2020-06-23 11:29:07 | graknlabs/workbase | https://api.github.com/repos/graknlabs/workbase | closed | Inferred attributes are drawn multiple times because they have different IDs | priority: medium type: bug | ## Description
When querying for inferred concepts we get back nodes with randomly generated IDs.
In the case of inferred attributes we might receive 2 nodes that have the same value but different IDs; we should probably just show the attribute once.
## Reproducible Steps
Create a schema with a rule which infers an attribute and try querying for it (you probably need to fetch the same attribute using multiple queries so that the IDs of the inferred attribute will actually be different).
## Expected Output
1 attribute node with a given value
## Actual Output
2 nodes with the same type and value:

| 1.0 | Inferred attributes are drawn multiple times because they have different IDs - ## Description
When querying for inferred concepts we get back nodes with randomly generated IDs.
In the case of inferred attributes we might receive 2 nodes that have the same value but different IDs; we should probably just show the attribute once.
## Reproducible Steps
Create a schema with a rule which infers an attribute and try querying for it (you probably need to fetch the same attribute using multiple queries so that the IDs of the inferred attribute will actually be different).
## Expected Output
1 attribute node with a given value
## Actual Output
2 nodes with the same type and value:

| priority | inferred attributes are drawn multiple times because they have different ids description when querying for inferred concepts when get back nodes with randomly generated ids in the case of inferred attributes we might receive nodes that have the same value but different ids we should probably just show the attribute once reproducible steps create a schema with a rule which infers an attribute and trying querying for it probably need to fetch the same attribute using multiple queries so that the ids of the inferred attribute will actually be different expected output attribute node with a given value actual output nodes with the same type and value | 1 |
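The de-duplication the report suggests — showing an inferred attribute once even though each query mints a fresh ID — amounts to keying nodes by (type, value) instead of by ID. A minimal sketch, using a hypothetical node shape rather than Workbase's real data model:

```python
def dedupe_inferred_attributes(nodes):
    """Collapse attribute nodes that share a type and value.

    `nodes` are dicts like {"id": ..., "type": ..., "value": ...};
    inferred attributes get random IDs per query, so keying on
    (type, value) keeps only the first node seen for each pair.
    """
    seen = {}
    for node in nodes:
        key = (node["type"], node["value"])
        if key not in seen:
            seen[key] = node
    return list(seen.values())
```

Applied before rendering, two inferred `rank: colonel` nodes with different IDs would collapse into a single node on the graph.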
472,522 | 13,626,285,005 | IssuesEvent | 2020-09-24 10:47:07 | FAIRsharing/fairsharing.github.io | https://api.github.com/repos/FAIRsharing/fairsharing.github.io | closed | Request ownership button | Medium priority | If logged in but not a maintainer of a record then the "claim ownership" etc. button should be placed there instead.
Clicking the button should post to the maintenance_requests controller and disable the button on a successful request. | 1.0 | Request ownership button - If logged in but not a maintainer of a record then the "claim ownership" etc. button should be placed there instead.
Clicking the button should post to the maintenance_requests controller and disable the button on a successful request. | priority | request ownership button if logged in but not a maintainer of a record then the claim ownership etc button should be placed there instead clicking the button should post to the maintenance requests controller and disable the button on a successful request | 1 |
815,692 | 30,567,718,318 | IssuesEvent | 2023-07-20 19:10:26 | Loony4Logic/jamt | https://api.github.com/repos/Loony4Logic/jamt | closed | sending all logs | priority: medium | Send all logs on a GET request to `/logs`
it should send realtime data along with previously stored logs using server-sent events (SSE).
> Feasibility is up for discussion! | 1.0 | sending all logs - Send all logs on a GET request to `/logs`
it should send realtime data along with previously stored logs using server-sent events (SSE).
> Feasibility is up for discussion! | priority | sending all logs send all logs on get request on logs it should send realtime data with previously stored logs using server side event feasibility is up for discussion | 1 |
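The requested behaviour — replay previously stored logs on a GET to `/logs`, then stream new entries — maps naturally onto the server-sent-events wire format. A minimal sketch of that framing (the numbering scheme and log shape are assumptions, not jamt's actual API):

```python
def sse_frame(log_line, event_id=None):
    """Format one log entry as a server-sent-events frame: one or
    more "field: value" lines terminated by a blank line."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    # One data field per line keeps multi-line log entries valid.
    for part in str(log_line).splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"

def replay_stored(stored_logs):
    """Yield frames for previously stored logs, numbering them so a
    reconnecting client can resume via Last-Event-ID before the
    server switches over to streaming new entries."""
    for i, entry in enumerate(stored_logs):
        yield sse_frame(entry, event_id=i)
```

An HTTP handler for `/logs` would write these frames with `Content-Type: text/event-stream` and keep the connection open for live entries.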
43,772 | 2,892,601,400 | IssuesEvent | 2015-06-15 13:55:43 | expath/xspec | https://api.github.com/repos/expath/xspec | closed | Move XSpec to GitHub | auto-migrated Priority-Medium Type-Other | ```
I suggest that XSpec should move from GoogleCode to GitHub before GoogleCode
closes:
http://google-opensource.blogspot.ie/2015/03/farewell-to-google-code.html
There's already two copies of XSpec on GitHub:
https://github.com/search?utf8=%E2%9C%93&q=code.google.com%2Fp%2Fxspec&type=Repo
sitories&ref=searchresults
I suggest that we make an 'XSpec' Organization on GitHub and migrate the code
there. The low overhead of accepting pull requests on GitHub may also mean
that we get more fixes from current non-committers for some of the outstanding
XSpec issues.
Regards,
Tony.
```
Original issue reported on code.google.com by `dev.xspec@menteithconsulting.com` on 6 Jun 2015 at 4:47 | 1.0 | Move XSpec to GitHub - ```
I suggest that XSpec should move from GoogleCode to GitHub before GoogleCode
closes:
http://google-opensource.blogspot.ie/2015/03/farewell-to-google-code.html
There's already two copies of XSpec on GitHub:
https://github.com/search?utf8=%E2%9C%93&q=code.google.com%2Fp%2Fxspec&type=Repo
sitories&ref=searchresults
I suggest that we make an 'XSpec' Organization on GitHub and migrate the code
there. The low overhead of accepting pull requests on GitHub may also mean
that we get more fixes from current non-committers for some of the outstanding
XSpec issues.
Regards,
Tony.
```
Original issue reported on code.google.com by `dev.xspec@menteithconsulting.com` on 6 Jun 2015 at 4:47 | priority | move xspec to github i suggest that xspec should move from googlecode to github before googlecode closes there s already two copies of xspec on github sitories ref searchresults i suggest that we make an xspec organization on github and migrate the code there the low overhead of accepting pull requests on github may also mean that we get more fixes from current non committers for some of the outstanding xspec issues regards tony original issue reported on code google com by dev xspec menteithconsulting com on jun at | 1 |
269,320 | 8,434,607,637 | IssuesEvent | 2018-10-17 10:41:43 | geosolutions-it/smb-app | https://api.github.com/repos/geosolutions-it/smb-app | opened | Notify user about invalid tracks | Priority: Medium ready | The very first FCM notification to be implemented is "track_validated". The payload will be initially used only to inform user that a track was rejected because of some issues (error message).
Later it will also be used to manage session items state, paving the road to session editing.
depends on #50 | 1.0 | Notify user about invalid tracks - The very first FCM notification to be implemented is "track_validated". The payload will be initially used only to inform user that a track was rejected because of some issues (error message).
Later it will also be used to manage session items state, paving the road to session editing.
depends on #50 | priority | notify user about invalid tracks the very first fcm notification to be implemented is track validated the payload will be initially used only to inform user that a track was rejected because of some issues error message later it will also be used to manage session items state paving the road to session editing depends on | 1 |
670,033 | 22,666,746,619 | IssuesEvent | 2022-07-03 02:04:03 | ZeNyfh/gigavibe-java-edition | https://api.github.com/repos/ZeNyfh/gigavibe-java-edition | opened | now playing embed broken with single tracks | bug Priority: High Medium | if the track that started playing has no track after it, it will not send a "now playing" embed.
this isn't an issue while queuing a single track and listening to it, but it is if there is more than 1 track queued.
| 1.0 | now playing embed broken with single tracks - if the track that started playing has no track after it, it will not send a "now playing" embed.
this isn't an issue while queuing a single track and listening to it, but it is if there is more than 1 track queued.
| priority | now playing embed broken with single tracks if the track that started playing has no track after it it will not send a now playing embed this isnt an issue while queuing a single track and listening to it but it is if there is more than track queued | 1 |
247,536 | 7,919,567,976 | IssuesEvent | 2018-07-04 17:35:50 | ubc/compair | https://api.github.com/repos/ubc/compair | closed | Make login text and CAS/SAML login buttons configurable | back end developer suggestion enhancement front end medium priority | Add environment variables to store the html for the login text and CAS/SAML buttons
default value will be what we currently use | 1.0 | Make login text and CAS/SAML login buttons configurable - Add environment variables to store the html for the login text and CAS/SAML buttons
default value will be what we currently use | priority | make login text and cas saml login buttons configurable add environment variables to store the html for for the login text and cas saml buttons default value will be what we currently use | 1 |
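The change described above — environment variables holding the login HTML, with the current markup as the fallback — can be sketched like this (the variable names and default fragments are made up for illustration, not ComPAIR's real markup):

```python
import os

# Fallbacks would mirror whatever the app currently renders;
# these fragments are placeholders.
DEFAULT_LOGIN_TEXT = "<p>Log in with your institution account.</p>"
DEFAULT_CAS_BUTTON = "<button>CAS Login</button>"

def login_fragments(env=None):
    """Return the login HTML fragments, preferring environment
    variables and falling back to the built-in defaults."""
    env = os.environ if env is None else env
    return {
        "login_text": env.get("LOGIN_TEXT_HTML", DEFAULT_LOGIN_TEXT),
        "cas_button": env.get("CAS_BUTTON_HTML", DEFAULT_CAS_BUTTON),
    }
```

Passing `env` explicitly keeps the lookup testable; in production the function just reads `os.environ`.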
782,367 | 27,494,875,158 | IssuesEvent | 2023-03-05 02:32:57 | CreeperMagnet/the-creepers-code | https://api.github.com/repos/CreeperMagnet/the-creepers-code | opened | Positional anchors do not drop ender pearls when broken and filled | priority: medium | This was caused by the fix for #127. Can be fixed by adding an exception for this specific case. | 1.0 | Positional anchors do not drop ender pearls when broken and filled - This was caused by the fix for #127. Can be fixed by adding an exception for this specific case. | priority | positional anchors do not drop ender pearls when broken and filled this was caused by the fix for can be fixed by adding an exception for this specific case | 1 |
545,994 | 15,981,957,047 | IssuesEvent | 2021-04-18 01:06:31 | ProjectSidewalk/SidewalkWebpage | https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage | closed | Human avatar goes missing from the top-down map | Audit Priority: Medium bug in progress potential-intern-assignment | Reported by 4 CMSC434 users and several active users of the deployed app. One reported that it happened towards the end of a mission, but this might not be true for all users.
<img width="203" alt="screen shot 2016-10-07 at 5 44 58 pm" src="https://cloud.githubusercontent.com/assets/2873216/19206289/ce6bb176-8cb5-11e6-9421-dfb3d692ee0f.png">
| 1.0 | Human avatar goes missing from the top-down map - Reported by 4 CMSC434 users and several active users of the deployed app. One reported that it happened towards the end of a mission, but this might not be true for all users.
<img width="203" alt="screen shot 2016-10-07 at 5 44 58 pm" src="https://cloud.githubusercontent.com/assets/2873216/19206289/ce6bb176-8cb5-11e6-9421-dfb3d692ee0f.png">
| priority | human avatar goes missing from the top down map reported by users and several active users of the deployed app one reported that it happened towards the end of a mission but this might not be true for all users img width alt screen shot at pm src | 1 |
433,636 | 12,508,117,701 | IssuesEvent | 2020-06-02 15:05:25 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | RecurrenceEditor does not validate nor updates "End recurrence on" date | Bug C: Scheduler FP: In Development Kendo2 Next LIB Priority 3 SEV: Medium | ### Bug report
The Recurrence editor does not validate the [End recurrence on date](http://screencast.com/t/i4bLFndgAo). When the date in the EndOn DatePicker is invalid, a javascript error is thrown.
### Reproduction of the problem
1. Go to https://demos.telerik.com/kendo-ui/scheduler/index
1. Start creating event;
1. Set recurrence to Daily;
1. Select **End: On** option
### Current behavior
- When you set invalid date in the "End recurrence On" date picker, e.g. 60/11/2013, a javascript error is thrown: `"Uncaught TypeError: Cannot read property 'getFullYear' of null";`
- When you try to save the event with the invalid date for "End recurrence On" date picker, the popup is closed as if the event is created successfully, but it is not created or not created correctly;
Here is a video of the issue: http://screencast.com/t/2uBH4dxLyH
### Expected/desired behavior
The Recurrence date picker should not throw error and should validate as the Start and End DateTime pickers. Similar to this fixed issue: https://github.com/telerik/kendo-ui-core/issues/3846
### Environment
* **Kendo UI version:** 2019.2.619
* **Browser:** all | 1.0 | RecurrenceEditor does not validate nor updates "End recurrence on" date - ### Bug report
The Recurrence editor does not validate the [End recurrence on date](http://screencast.com/t/i4bLFndgAo). When the date in the EndOn DatePicker is invalid, a javascript error is thrown.
### Reproduction of the problem
1. Go to https://demos.telerik.com/kendo-ui/scheduler/index
1. Start creating event;
1. Set recurrence to Daily;
1. Select **End: On** option
### Current behavior
- When you set invalid date in the "End recurrence On" date picker, e.g. 60/11/2013, a javascript error is thrown: `"Uncaught TypeError: Cannot read property 'getFullYear' of null";`
- When you try to save the event with the invalid date for "End recurrence On" date picker, the popup is closed as if the event is created successfully, but it is not created or not created correctly;
Here is a video of the issue: http://screencast.com/t/2uBH4dxLyH
### Expected/desired behavior
The Recurrence date picker should not throw error and should validate as the Start and End DateTime pickers. Similar to this fixed issue: https://github.com/telerik/kendo-ui-core/issues/3846
### Environment
* **Kendo UI version:** 2019.2.619
* **Browser:** all | priority | recurrenceeditor does not validate nor updates end recurrence on date bug report the recurrence editor does not validate the when the date in the endon datepicker is invalid a javascript error is thrown reproduction of the problem go to start creating event set recurrence to daily select end on option current behavior when you set invalid date in the end recurrence on date picker e g a javascript error is thrown uncaught typeerror cannot read property getfullyear of null when you try to save the event with the invalid date for end recurrence on date picker the popup is closed as if the event is created successfully but it is not created or not created correctly here is a video of the issue expected desired behavior the recurrence date picker should not throw error and should validate as the start and end datetime pickers similar to this fixed issue environment kendo ui version browser all | 1 |
143,933 | 5,532,928,569 | IssuesEvent | 2017-03-21 12:00:01 | LikeMyBread/Saylua | https://api.github.com/repos/LikeMyBread/Saylua | closed | Write basic Javascript tests for Dungeons | Medium Priority | Created to help narrow the scope of #12.
Check for basic render / paint issues. | 1.0 | Write basic Javascript tests for Dungeons - Created to help narrow the scope of #12.
Check for basic render / paint issues. | priority | write basic javascript tests for dungeons created to help narrow the scope of check for basic render paint issues | 1 |
461,229 | 13,226,867,296 | IssuesEvent | 2020-08-18 01:18:42 | openshift/odo | https://api.github.com/repos/openshift/odo | closed | odo watch does not tell me the err on devfile validation failure | area/devfile kind/bug priority/Medium | /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:**
**Output of `odo version`:** master
## How did you run odo exactly?
`odo watch` but I had two default commands. It errors out but doesn't tell me what the error is and if I am newbie I wouldn't be able to figure out what the issue was
## Any logs, error output, etc?
```
$ odo watch
Waiting for something to change in /Users/maysun/dev/redhat/resources/springboot-ex
File /Users/maysun/dev/redhat/resources/springboot-ex/src/main/java/application/rest/v1/Example.java changed
Pushing files...
Validation
✗ Validating the devfile [54323ns]
Waiting for something to change in /Users/maysun/dev/redhat/resources/springboot-ex
File /Users/maysun/dev/redhat/resources/springboot-ex/src/main/java/application/rest/v1/Example.java changed
Pushing files...
Validation
✗ Validating the devfile [61381ns]
Waiting for something to change in /Users/maysun/dev/redhat/resources/springboot-ex
```
| 1.0 | odo watch does not tell me the err on devfile validation failure - /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:**
**Output of `odo version`:** master
## How did you run odo exactly?
`odo watch` but I had two default commands. It errors out but doesn't tell me what the error is and if I am newbie I wouldn't be able to figure out what the issue was
## Any logs, error output, etc?
```
$ odo watch
Waiting for something to change in /Users/maysun/dev/redhat/resources/springboot-ex
File /Users/maysun/dev/redhat/resources/springboot-ex/src/main/java/application/rest/v1/Example.java changed
Pushing files...
Validation
✗ Validating the devfile [54323ns]
Waiting for something to change in /Users/maysun/dev/redhat/resources/springboot-ex
File /Users/maysun/dev/redhat/resources/springboot-ex/src/main/java/application/rest/v1/Example.java changed
Pushing files...
Validation
✗ Validating the devfile [61381ns]
Waiting for something to change in /Users/maysun/dev/redhat/resources/springboot-ex
```
| priority | odo watch does not tell me the err on devfile validation failure kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system output of odo version master how did you run odo exactly odo watch but i had two default commands it errors out but doesn t tell me what the error is and if i am newbie i wouldn t be able to figure out what the issue was any logs error output etc odo watch waiting for something to change in users maysun dev redhat resources springboot ex file users maysun dev redhat resources springboot ex src main java application rest example java changed pushing files validation ✗ validating the devfile waiting for something to change in users maysun dev redhat resources springboot ex file users maysun dev redhat resources springboot ex src main java application rest example java changed pushing files validation ✗ validating the devfile waiting for something to change in users maysun dev redhat resources springboot ex | 1 |
206,444 | 7,112,385,973 | IssuesEvent | 2018-01-17 16:52:18 | marklogic-community/data-explorer | https://api.github.com/repos/marklogic-community/data-explorer | opened | FERR-30 - Pull all of the docTypes from a database to use for creation of queries | Component - UI Config JIRA Migration Priority - Medium Type - Enhancement | **Original Reporter:** Greg Meddles
**Created:** 22/Jun/17 3:22 PM
# Description
As a configuration user, I would like to see a list of all of the document types that are available to me (based on security) so that I can immediately jump into creating queries and views based on the document in my databases | 1.0 | FERR-30 - Pull all of the docTypes from a database to use for creation of queries - **Original Reporter:** Greg Meddles
**Created:** 22/Jun/17 3:22 PM
# Description
As a configuration user, I would like to see a list of all of the document types that are available to me (based on security) so that I can immediately jump into creating queries and views based on the document in my databases | priority | ferr pull all of the doctypes from a database to use for creation of queries original reporter greg meddles created jun pm description as a configuration user i would like to see a list of all of the document types that are available to me based on security so that i can immediately jump into creating queries and views based on the document in my databases | 1 |
56,807 | 3,081,190,896 | IssuesEvent | 2015-08-22 13:25:25 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Double set of asterisks before a private message | bug imported Priority-Medium | _From [toss.Alexey](https://code.google.com/u/toss.Alexey/) on November 14, 2012 22:03:09_
Double set of asterisks before private messages about a user joining/leaving
[00:59:40] *** *** User left [OpChat - dchub://127.0.0.1:411] ***
[00:59:42] *** *** User joined [OpChat - hub] ***
11947
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=854_ | 1.0 | Double set of asterisks before a private message - _From [toss.Alexey](https://code.google.com/u/toss.Alexey/) on November 14, 2012 22:03:09_
Double set of asterisks before private messages about a user joining/leaving
[00:59:40] *** *** User left [OpChat - dchub://127.0.0.1:411] ***
[00:59:42] *** *** User joined [OpChat - hub] ***
11947
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=854_ | priority | double set of asterisks before a private message from on november double set of asterisks before private messages about a user joining leaving user left user joined original issue | 1 |
25,563 | 2,683,844,301 | IssuesEvent | 2015-03-28 11:28:42 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | PictureView mod15/15a/15b - plugin loading problems when rendering via DirectX | 2–5 stars bug imported Priority-Medium | _From [victo...@mail333.com](https://code.google.com/u/114732384912597087095/) on October 29, 2009 12:17:01_
OS version: XP SP3
FAR version: 2.0.1187
With output configured through DirectX, the plugin, judging by observation in Process
Explorer, does not load; moreover, it apparently terminates abnormally
on the Post Script and DjVu formats. When rendering through GDI+
the issue does not reproduce. I cannot name the exact cause yet;
I suspect a bug in DX.pdv
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=122_ | 1.0 | PictureView mod15/15a/15b - plugin loading problems when rendering via DirectX - _From [victo...@mail333.com](https://code.google.com/u/114732384912597087095/) on October 29, 2009 12:17:01_
OS version: XP SP3
FAR version: 2.0.1187
With output configured through DirectX, the plugin, judging by observation in Process
Explorer, does not load; moreover, it apparently terminates abnormally
on the Post Script and DjVu formats. When rendering through GDI+
the issue does not reproduce. I cannot name the exact cause yet;
I suspect a bug in DX.pdv
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=122_ | priority | pictureview plugin loading problems when rendering via directx from on october os version xp far version with output configured through directx the plugin judging by observation in process explorer does not load moreover it apparently terminates abnormally on the post script and djvu formats when rendering through gdi the issue does not reproduce i cannot name the exact cause yet i suspect a bug in dx pdv original issue | 1 |
613,802 | 19,098,956,179 | IssuesEvent | 2021-11-29 19:59:02 | WordPress/learn | https://api.github.com/repos/WordPress/learn | opened | Add workshop filter for WordPress version | [Type] Enhancement [Component] Learn Theme [Component] Learn Plugin [Priority] Medium | We have the `wporg_wp_version` taxonomy in the dashboard so we can tag content for specific versions of WordPress and it would be great if we could add this to the workshop filters so that people could find content that is version-specific.
If we could have this live before the release of 5.9 then it can be used in the about page and announcement posts and we can link directly to the filtered content. | 1.0 | Add workshop filter for WordPress version - We have the `wporg_wp_version` taxonomy in the dashboard so we can tag content for specific versions of WordPress and it would be great if we could add this to the workshop filters so that people could find content that is version-specific.
If we could have this live before the release of 5.9 then it can be used in the about page and announcement posts and we can link directly to the filtered content. | priority | add workshop filter for wordpress version we have the wporg wp version taxonomy in the dashboard so we can tag content for specific versions of wordpress and it would be great if we could add this to the workshop filters so that people could find content that is version specific if we could have this live before the release of then it can be used in the about page and announcement posts and we can link directly to the filtered content | 1 |
593,861 | 18,018,675,091 | IssuesEvent | 2021-09-16 16:32:41 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | opened | [GM-VV] (MARLO) Deliverable Status Adjustments | Priority - Medium Type -Task | Deliverable Status Adjustments
- [ ] Fix deliverableMetadata wrong casting on equals() method.
- [ ] Limit deliverables to only display the ones marked as "Complete".
- [ ] Fix validations for Deliverable dissemination, already disseminated and open access fields.
- [ ] Add validations for Deliverable status, Year of expected completion and New expected year of completion:
- [ ] When a Deliverable's Year of expected completion is earlier than 2021, the New expected year of completion field will be enabled; otherwise this field will be disabled and not visible.
- [ ] When a Year Phase is earlier than 2021, the Deliverable status options will be disabled; otherwise all options will be enabled except "Extended".
**Move to Closed when:** Moved to Dev.
| 1.0 | [GM-VV] (MARLO) Deliverable Status Adjustments - Deliverable Status Adjustments
- [ ] Fix deliverableMetadata wrong casting on equals() method.
- [ ] Limit deliverables to only display the ones marked as "Complete".
- [ ] Fix validations for Deliverable dissemination, already disseminated and open access fields.
- [ ] Add validations for Deliverable status, Year of expected completion and New expected year of completion:
- [ ] When a Deliverable's Year of expected completion is earlier than 2021, the New expected year of completion field will be enabled; otherwise this field will be disabled and not visible.
- [ ] When a Year Phase is earlier than 2021, the Deliverable status options will be disabled; otherwise all options will be enabled except "Extended".
**Move to Closed when:** Moved to Dev.
| priority | marlo deliverable status adjustments deliverable status adjustments fix deliverablemetadata wrong casting on equals method limit deliverables to only display the ones marked as complete fix validations for deliverable dissemination already disseminated and open access fields add validations for deliverable status year of expected completion and new expected year of completion when a deliverable year of expected completion is less than it will have enabled the field new expected year of completion if not this field will be disabled and not visible when a year phase is less than the deliverable status options will be disabled if not then all options are going to be enables except extended move to closed when moved to dev | 1 |
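The enable/disable rules in the checklist above can be expressed as two small predicates. The sketch below is illustrative only: the field names, the status options, and the cutoff constant are assumptions, not MARLO's actual code.

```python
# Illustrative sketch of the rules above; names and options are assumptions.
CUTOFF_YEAR = 2021

def new_expected_year_enabled(expected_completion_year: int) -> bool:
    # "New expected year of completion" is only enabled (and visible)
    # when the original expected year is before the cutoff.
    return expected_completion_year < CUTOFF_YEAR

def allowed_status_options(phase_year: int) -> list:
    # For phases before the cutoff the status dropdown is disabled
    # (no selectable options); otherwise everything but "Extended".
    options = ["Ongoing", "Complete", "Cancelled", "Extended"]
    if phase_year < CUTOFF_YEAR:
        return []
    return [o for o in options if o != "Extended"]
```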
738,047 | 25,543,036,822 | IssuesEvent | 2022-11-29 16:35:51 | Heroic-Games-Launcher/HeroicGamesLauncher | https://api.github.com/repos/Heroic-Games-Launcher/HeroicGamesLauncher | opened | [macOS] Add support for Wineskin and Wine-crossover as well as macOS DXVK installation | feature request macOS medium-priority | ### Problem description
Heroic currently only works with Crossover, which, although it provides great compatibility, is paid software.
After some investigation, it seems that it is possible to use other tools to play Windows games on macOS, like wine-crossover (Catalina+ only, though) and WineSkin.
### Feature description
- Auto-detect Wine-crossover and maybe other wine versions installed globally on macOS.
- Auto-detect WineSkin Wrappers (bottles) and make them available to select into Heroic.
- Give the ability to install dxvk-macOS if the Wine type is different from Crossover.
### Alternatives
_No response_
### Additional information
Some of the investigation is registered on this issue on WineSkin github with some information on how to use those tools in Heroic:
https://github.com/Gcenx/WineskinServer/issues/329investigations | 1.0 | [macOS] Add support for Wineskin and Wine-crossover as well as macOS DXVK installation - ### Problem description
Heroic currently only works with Crossover, which, although it provides great compatibility, is paid software.
After some investigation, it seems that it is possible to use other tools to play Windows games on macOS, like wine-crossover (Catalina+ only, though) and WineSkin.
### Feature description
- Auto-detect Wine-crossover and maybe other wine versions installed globally on macOS.
- Auto-detect WineSkin Wrappers (bottles) and make them available to select into Heroic.
- Give the ability to install dxvk-macOS if the Wine type is different from Crossover.
### Alternatives
_No response_
### Additional information
Some of the investigation is registered on this issue on WineSkin github with some information on how to use those tools in Heroic:
https://github.com/Gcenx/WineskinServer/issues/329investigations | priority | add support for wineskin and wine crossover as well as macos dxvk installation problem description heroic currently only works with crossover although provides great compatibility is a paid software after some investigation it seems that is possible to use other tools to play windows games on macos like wine crossover catalina only though and wineskin feature description auto detect wine crossover and maybe other wine versions installed globally on macos auto detect wineskin wrappers bottles and make them available to select into heroic gives the ability to install dxvk macos if wine type is different than crossover alternatives no response additional information some of the investigation is registered on this issue on wineskin github with some information on how to use those tools in heroic | 1 |
137,086 | 5,293,752,400 | IssuesEvent | 2017-02-09 08:43:22 | hpi-swt2/workshop-portal | https://api.github.com/repos/hpi-swt2/workshop-portal | closed | Show other coaches' remarks on the overview via an icon | Medium Priority needs review team-helene | As an organizer, when selecting participants I want to have other coaches' remarks, if there are any, shown on the applicant overview page via an icon (simply to the right of "Details"). On mouse-over, the first 10 words of the remark should be shown, followed by "...".
The icon can look similar to this one:

| 1.0 | Show other coaches' remarks on the overview via an icon - As an organizer, when selecting participants I want to have other coaches' remarks, if there are any, shown on the applicant overview page via an icon (simply to the right of "Details"). On mouse-over, the first 10 words of the remark should be shown, followed by "...".
The icon can look similar to this one:

| priority | show other coaches remarks on the overview via an icon as an organizer when selecting participants i want to have other coaches remarks if there are any shown on the applicant overview page via an icon simply to the right of details on mouse over the first words of the remark should be shown followed by the icon can look similar to this one | 1 |
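The mouse-over preview requested above (first 10 words, then "...") is a simple word-level truncation. A minimal sketch in Python, illustrative only and not the portal's actual code:

```python
def tooltip_preview(remark: str, word_limit: int = 10) -> str:
    # Show at most `word_limit` words of a coach's remark; append an
    # ellipsis when the remark was cut off.
    words = remark.split()
    if len(words) <= word_limit:
        return remark
    return " ".join(words[:word_limit]) + " ..."
```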
158,013 | 6,020,350,024 | IssuesEvent | 2017-06-07 16:15:17 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Map should error nicely if not passed a 2D array | Affects Released Bug? Effort Low In Progress MozSprint Package Novice Priority Medium | Hello, I am getting the following errors when attempting to create a map from the attached fits file:
``` python
In [6]: map.Map('1130643840_vv_c076-077_f8-14_t034345_t034444_XX_d002.fits')
Out[6]: ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/core/formatters.pyc in __call__(self, obj)
697 type_pprinters=self.type_printers,
698 deferred_pprinters=self.deferred_printers)
--> 699 printer.pretty(obj)
700 printer.flush()
701 return stream.getvalue()
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/lib/pretty.pyc in pretty(self, obj)
381 if callable(meth):
382 return meth(obj, self, cycle)
--> 383 return _default_pprint(obj, self, cycle)
384 finally:
385 self.end_group()
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _default_pprint(obj, p, cycle)
501 if _safe_getattr(klass, '__repr__', None) not in _baseclass_reprs:
502 # A user-provided repr. Find newlines and replace them with p.break_()
--> 503 _repr_pprint(obj, p, cycle)
504 return
505 p.begin_group(1, '<')
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _repr_pprint(obj, p, cycle)
692 """A pprint that just redirects to the normal repr function."""
693 # Find newlines and replace them with p.break_()
--> 694 output = repr(obj)
695 for idx,output_line in enumerate(output.splitlines()):
696 if idx:
/Users/kamen/anaconda/lib/python2.7/site-packages/sunpy/map/mapbase.py in __repr__(self)
221 obs=self.observatory, inst=self.instrument, det=self.detector,
222 meas=self.measurement, wave=self.wavelength, date=self.date, dt=self.exposure_time,
--> 223 dim=u.Quantity(self.dimensions),
224 scale=u.Quantity(self.scale),
225 tmf=TIME_FORMAT)
/Users/kamen/anaconda/lib/python2.7/site-packages/sunpy/map/mapbase.py in dimensions(self)
301 The dimensions of the array (x axis first, y axis second).
302 """
--> 303 return Pair(*u.Quantity(np.flipud(self.data.shape), 'pixel'))
304
305 @property
TypeError: __new__() takes exactly 3 arguments (5 given)
```
Any thoughts? The fits file contains radio observations of the Sun by the Murchison Widefield Array telescope in Australia. Thanks!
[1130643840_c076-077_f8-14_t034345_t034444_XX_d002.fits.zip](https://github.com/sunpy/sunpy/files/527362/1130643840_c076-077_f8-14_t034345_t034444_XX_d002.fits.zip)
| 1.0 | Map should error nicely if not passed a 2D array - Hello, I am getting the following errors when attempting to create a map from the attached fits file:
``` python
In [6]: map.Map('1130643840_vv_c076-077_f8-14_t034345_t034444_XX_d002.fits')
Out[6]: ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/core/formatters.pyc in __call__(self, obj)
697 type_pprinters=self.type_printers,
698 deferred_pprinters=self.deferred_printers)
--> 699 printer.pretty(obj)
700 printer.flush()
701 return stream.getvalue()
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/lib/pretty.pyc in pretty(self, obj)
381 if callable(meth):
382 return meth(obj, self, cycle)
--> 383 return _default_pprint(obj, self, cycle)
384 finally:
385 self.end_group()
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _default_pprint(obj, p, cycle)
501 if _safe_getattr(klass, '__repr__', None) not in _baseclass_reprs:
502 # A user-provided repr. Find newlines and replace them with p.break_()
--> 503 _repr_pprint(obj, p, cycle)
504 return
505 p.begin_group(1, '<')
/Users/kamen/anaconda/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _repr_pprint(obj, p, cycle)
692 """A pprint that just redirects to the normal repr function."""
693 # Find newlines and replace them with p.break_()
--> 694 output = repr(obj)
695 for idx,output_line in enumerate(output.splitlines()):
696 if idx:
/Users/kamen/anaconda/lib/python2.7/site-packages/sunpy/map/mapbase.py in __repr__(self)
221 obs=self.observatory, inst=self.instrument, det=self.detector,
222 meas=self.measurement, wave=self.wavelength, date=self.date, dt=self.exposure_time,
--> 223 dim=u.Quantity(self.dimensions),
224 scale=u.Quantity(self.scale),
225 tmf=TIME_FORMAT)
/Users/kamen/anaconda/lib/python2.7/site-packages/sunpy/map/mapbase.py in dimensions(self)
301 The dimensions of the array (x axis first, y axis second).
302 """
--> 303 return Pair(*u.Quantity(np.flipud(self.data.shape), 'pixel'))
304
305 @property
TypeError: __new__() takes exactly 3 arguments (5 given)
```
Any thoughts? The fits file contains radio observations of the Sun by the Murchison Widefield Array telescope in Australia. Thanks!
[1130643840_c076-077_f8-14_t034345_t034444_XX_d002.fits.zip](https://github.com/sunpy/sunpy/files/527362/1130643840_c076-077_f8-14_t034345_t034444_XX_d002.fits.zip)
| priority | map should error nicely if not passed a array hello i am getting the following errors when attempting to create a map from the attached fits file python in map map vv xx fits out typeerror traceback most recent call last users kamen anaconda lib site packages ipython core formatters pyc in call self obj type pprinters self type printers deferred pprinters self deferred printers printer pretty obj printer flush return stream getvalue users kamen anaconda lib site packages ipython lib pretty pyc in pretty self obj if callable meth return meth obj self cycle return default pprint obj self cycle finally self end group users kamen anaconda lib site packages ipython lib pretty pyc in default pprint obj p cycle if safe getattr klass repr none not in baseclass reprs a user provided repr find newlines and replace them with p break repr pprint obj p cycle return p begin group users kamen anaconda lib site packages ipython lib pretty pyc in repr pprint obj p cycle a pprint that just redirects to the normal repr function find newlines and replace them with p break output repr obj for idx output line in enumerate output splitlines if idx users kamen anaconda lib site packages sunpy map mapbase py in repr self obs self observatory inst self instrument det self detector meas self measurement wave self wavelength date self date dt self exposure time dim u quantity self dimensions scale u quantity self scale tmf time format users kamen anaconda lib site packages sunpy map mapbase py in dimensions self the dimensions of the array x axis first y axis second return pair u quantity np flipud self data shape pixel property typeerror new takes exactly arguments given any thoughts the fits file contains radio observations of the sun by the murchison widefield array telescope in australia thanks | 1 |
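The traceback in the issue above boils down to unpacking an N-dimensional shape into a two-field namedtuple. Below is a minimal reproduction of that failure mode together with the friendlier early check the issue title asks for. This is a sketch, not sunpy's actual fix; the array is abstracted as a plain shape tuple.

```python
from collections import namedtuple

Pair = namedtuple("Pair", ["x", "y"])

def dimensions(shape):
    # Mirrors the failing line: a reversed shape unpacked into Pair.
    # A 4-axis radio cube passes four values to a two-field tuple and
    # raises the opaque "__new__() takes exactly 3 arguments" TypeError.
    return Pair(*reversed(shape))

def dimensions_checked(shape):
    # The nicer behaviour: fail early with a clear message.
    if len(shape) != 2:
        raise ValueError(f"Map expects a 2D array, got {len(shape)} axes")
    return Pair(*reversed(shape))
```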
244,480 | 7,875,505,918 | IssuesEvent | 2018-06-25 20:39:42 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: Solid Ground Street Lamps | Medium Priority | **Version:** 0.7.3.1 beta
**Steps to Reproduce:**
Street lamps on asphalt roads have no solid ground and are inactive
**Expected behavior:**
Lamp shines and stays on solid ground
**Actual behavior:**
The lamps have no solid ground and do not light up | 1.0 | USER ISSUE: Solid Ground Street Lamps - **Version:** 0.7.3.1 beta
**Steps to Reproduce:**
Street lamps on asphalt roads have no solid ground and are inactive
**Expected behavior:**
Lamp shines and stays on solid ground
**Actual behavior:**
The lamps have no solid ground and do not light up | priority | user issue solid ground street lamps version beta steps to reproduce street lamps on asphalt roads have no solid ground and are inactive expected behavior lamp shines and stays on solid ground actual behavior the lamps have no solid ground and do not light up | 1 |
146,678 | 5,625,836,558 | IssuesEvent | 2017-04-04 20:22:46 | SCIInstitute/ALMA-TDA | https://api.github.com/repos/SCIInstitute/ALMA-TDA | closed | add 3d contour tree back | enhancement medium priority | contour tree calculation seemed ok. however, simplification of the scalar field just turned things black... | 1.0 | add 3d contour tree back - contour tree calculation seemed ok. however, simplification of the scalar field just turned things black... | priority | add contour tree back contour tree calculation seemed ok however simplification of the scalar field just turned things black | 1 |
715,977 | 24,617,165,140 | IssuesEvent | 2022-10-15 13:17:18 | AY2223S1-CS2103T-T13-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T13-4/tp | closed | As a user, I can edit my contact's information | type.Story priority.Medium | so that I can keep their information up to date. | 1.0 | As a user, I can edit my contact's information - so that I can keep their information up to date. | priority | as a user i can edit my contact s information so that i can keep their information up to date | 1 |
777,078 | 27,267,713,143 | IssuesEvent | 2023-02-22 19:27:55 | mohammed-shakir/Camera-Tracking-Using-UWB-Navigation | https://api.github.com/repos/mohammed-shakir/Camera-Tracking-Using-UWB-Navigation | closed | Improve fram-rate | bug enhancement medium priority quick fix | Improve the frame rate for camera, because right now it is very slow, and is running on a low fram-rate | 1.0 | Improve fram-rate - Improve the frame rate for camera, because right now it is very slow, and is running on a low fram-rate | priority | improve fram rate improve the frame rate for camera because right now it is very slow and is running on a low fram rate | 1 |
721,574 | 24,831,795,103 | IssuesEvent | 2022-10-26 04:39:52 | AY2223S1-CS2103T-T12-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T12-4/tp | closed | add a recurring task associated with a patient | type.Story priority.Medium type.TimeBased | As a private nurse I want to add a recurring task associated with a patient so that I can keep track of tasks that I have to do repeatedly (e.g. weekly visits) | 1.0 | add a recurring task associated with a patient - As a private nurse I want to add a recurring task associated with a patient so that I can keep track of tasks that I have to do repeatedly (e.g. weekly visits) | priority | add a recurring task associated with a patient as a private nurse i want to add a recurring task associated with a patient so that i can keep track of tasks that i have to do repeatedly e g weekly visits | 1 |
671,234 | 22,749,763,513 | IssuesEvent | 2022-07-07 12:14:18 | phylum-dev/cli | https://api.github.com/repos/phylum-dev/cli | closed | Path segment should be removed when prompting for permissions | enhancement medium priority extensions | Currently when an extension asks for a URL permission which includes path elements, the entire URL will be printed (e.g. `phylum.io/path/segments`). However deno's extension system is solely based on exactly matching subdomain and domain, while ignoring any path segments.
To prevent extensions from tricking the user into granting unexpected permissions, we should strip the path segments when prompting for permissions to clearly communicate that everything under the domain will be accessible, regardless of path segments. | 1.0 | Path segment should be removed when prompting for permissions - Currently when an extension asks for a URL permission which includes path elements, the entire URL will be printed (e.g. `phylum.io/path/segments`). However deno's extension system is solely based on exactly matching subdomain and domain, while ignoring any path segments.
To prevent extensions from tricking the user into granting unexpected permissions, we should strip the path segments when prompting for permissions to clearly communicate that everything under the domain will be accessible, regardless of path segments. | priority | path segment should be removed when prompting for permissions currently when an extension asks for a url permission which includes path elements the entire url will be printed e g phylum io path segments however deno s extension system is solely based on exactly matching subdomain and domain while ignoring any path segments to prevent extensions from tricking the user into granting unexpected permissions we should strip the path segments when prompting for permissions to clearly communicate that everything under the domain will be accessible regardless of path segments | 1 |
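The normalisation proposed in the issue above is to keep only the host part of a permission URL before prompting. A standard-library sketch, illustrative only and not the extension system's actual code:

```python
from urllib.parse import urlsplit

def permission_origin(url: str) -> str:
    # Permission URLs may arrive without a scheme (e.g. "phylum.io/a/b");
    # prefixing "//" lets urlsplit treat the first segment as the host.
    if "//" not in url:
        url = "//" + url
    return urlsplit(url).netloc
```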
238,856 | 7,783,621,983 | IssuesEvent | 2018-06-06 10:33:23 | natolh/linnote | https://api.github.com/repos/natolh/linnote | opened | Top navigation appearing under sticky header section on small screens | BUG Good First Issue Medium Priority User Interface | When using small screens, the main navigation (inside the header) is appearing beneath the sticky section of the header because the sticky section has a smaller height than the main navigation. | 1.0 | Top navigation appearing under sticky header section on small screens - When using small screens, the main navigation (inside the header) is appearing beneath the sticky section of the header because the sticky section has a smaller height than the main navigation. | priority | top navigation appearing under sticky header section on small screens when using small screens the main navigation inside the header is appearing beneath the sticky section of the header because the sticky section has a smaller height than the main navigation | 1 |
222,611 | 7,434,383,992 | IssuesEvent | 2018-03-26 10:50:13 | Arquisoft/InciManager_i2a | https://api.github.com/repos/Arquisoft/InciManager_i2a | opened | Review the id/userId problem with Agents | priority: medium question | We should take a look at how we handle the ids in the code because maybe is not working properly. | 1.0 | Review the id/userId problem with Agents - We should take a look at how we handle the ids in the code because maybe is not working properly. | priority | review the id userid problem with agents we should take a look at how we handle the ids in the code because maybe is not working properly | 1 |
831,388 | 32,047,202,954 | IssuesEvent | 2023-09-23 05:43:27 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | ECR OCI Credentials for Docker and Helm repositories | type:feature priority-3-medium manager:helm datasource:docker status:ready | ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
34.47.1
### If you're self-hosting Renovate, select which platform you are using.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
In #19239 I describe the setup and make a request for the necessary hostRules.
Recap: I have an ECR OCI helm chart with ECR OCI helm sub-charts as dependencies.
I now have a better understanding of the issue.
Looking at the helm manager code https://github.com/renovatebot/renovate/blob/main/lib/modules/manager/helmv3/artifacts.ts#L29
I can see that if the repository is an OCI repository, it will swap "oci://" for "https://", which matches the documentation here https://docs.renovatebot.com/modules/manager/helmv3/
Here https://github.com/renovatebot/renovate/blob/main/lib/modules/manager/helmv3/artifacts.ts#L57 uses the username and password from the host rules.
For ECR OCI repositories, this should match the following login flow:
```
aws ecr get-login-password --region \<region> | helm registry login --username AWS --password-stdin \<account>.dkr.ecr.ca-central-1.amazonaws.com
```
If I create the following hostRule
```
{
"hostType": "docker",
"matchHost": "https://<account-id>.dkr.ecr.ca-central-1.amazonaws.com",
"username": "AWS",
"password": "*********"
}
```
I get an error related to finding the tags for the repository.
To get the tags, the docker data source is used.
This expects using the following authentication for ECR repositories https://github.com/renovatebot/renovate/blob/8e4b5231f812aba49191b0f64e24e8ad7d7a4d14/lib/modules/datasource/docker/index.ts#L232
This is expecting `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
So I need to pass the AWS credentials for ECR repositories but a token for Helm login.
My suggestion is to allow the token field to be used for dynamic authentication.
This is similar to this request https://github.com/renovatebot/renovate/issues/16912
and a bit like this https://docs.renovatebot.com/docker/#using-short-lived-access-tokens
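For context on how the two credential shapes relate: to my understanding, ECR's GetAuthorizationToken API returns a base64-encoded "AWS:&lt;password&gt;" string, which is exactly the username/password pair that `helm registry login` expects. A small sketch of that decoding; the token value here is fabricated for the example.

```python
import base64

def split_ecr_token(authorization_token: str):
    # Decode the base64 "user:password" blob ECR hands back; the user
    # part is "AWS" for ECR registries.
    user, _, password = (
        base64.b64decode(authorization_token).decode().partition(":")
    )
    return user, password

# Fabricated token, same shape as ECR's real output.
fake_token = base64.b64encode(b"AWS:example-session-password").decode()
```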
### Relevant debug logs
<details><summary>Logs</summary>
```
DEBUG: Updating Helm artifacts (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
DEBUG: Setting CONTAINERBASE_CACHE_DIR to /tmp/renovate/cache/containerbase (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
DEBUG: Using containerbase dynamic installs (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
TRACE: Authorization already set (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"url": "https://api.github.com/repos/helm/helm/releases?per_page=100"
TRACE: got request (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"url": "https://api.github.com/repos/helm/helm/releases?per_page=100",
"options": {
"method": "get",
"context": {"hostType": "github-releases"},
"hostType": "github-releases",
"baseUrl": "https://api.github.com/",
"paginate": true,
"responseType": "json",
"headers": {
"accept": "application/json, application/vnd.github.v3+json",
"user-agent": "RenovateBot/34.47.1 (https://github.com/renovatebot/renovate)",
"authorization": "***********"
},
"throwHttpErrors": true,
"hooks": {"beforeRedirect": ["[function]"]},
"timeout": 60000
}
TRACE: Authorization already set (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"url": "https://api.github.com/repositories/43723161/releases?per_page=100&page=2"
TRACE: got request (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"url": "https://api.github.com/repositories/43723161/releases?per_page=100&page=2",
"options": {
"method": "get",
"context": {"hostType": "github-releases"},
"hostType": "github-releases",
"baseUrl": "https://api.github.com/",
"paginate": false,
"responseType": "json",
"headers": {
"accept": "application/json, application/vnd.github.v3+json",
"user-agent": "RenovateBot/34.47.1 (https://github.com/renovatebot/renovate)",
"authorization": "***********"
},
"throwHttpErrors": true,
"hooks": {"beforeRedirect": ["[function]"]},
"timeout": 60000
}
DEBUG: Resolved stable matching version (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"toolName": "helm",
"constraint": undefined,
"resolvedVersion": "v3.10.2"
DEBUG: Executing command (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"command": "install-tool helm v3.10.2"
TRACE: Command options (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"commandOptions": {
"cwd": "/tmp/renovate/repos/github/<github-org>/<repo name>",
"encoding": "utf-8",
"env": {
"HELM_EXPERIMENTAL_OCI": "1",
"HOME": "/home/ubuntu",
"PATH": "/home/ubuntu/.local/bin:/go/bin:/home/ubuntu/bin:/opt/buildpack/tools/python/3.11.0/bin:/home/ubuntu/.npm-global/bin:/home/ubuntu/.cargo/bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/renovate/cache/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/renovate/cache/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
}
DEBUG: exec completed (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"durationMs": 32,
"stdout": "tool helm v3.10.2 is already installed\ntool is already linked: helm v3.10.2\nInstalled v2 /usr/local/buildpack/tools/v2/helm.sh in 0 seconds\nskip cleanup, not a docker build: log-5whbg\n",
"stderr": ""
DEBUG: Executing command (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"command": "helm repo add stable --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories https://charts.helm.sh/stable"
TRACE: Command options (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"commandOptions": {
"cwd": "/tmp/renovate/repos/github/<github-org>/<repo name>",
"encoding": "utf-8",
"env": {
"HELM_EXPERIMENTAL_OCI": "1",
"HOME": "/home/ubuntu",
"PATH": "/home/ubuntu/.local/bin:/go/bin:/home/ubuntu/bin:/opt/buildpack/tools/python/3.11.0/bin:/home/ubuntu/.npm-global/bin:/home/ubuntu/.cargo/bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/renovate/cache/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/renovate/cache/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
}
DEBUG: exec completed (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"durationMs": 1326,
"stdout": "\"stable\" has been added to your repositories\n",
"stderr": ""
DEBUG: Executing command (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"command": "helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>"
TRACE: Command options (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"commandOptions": {
"cwd": "/tmp/renovate/repos/github/<github-org>/<repo name>",
"encoding": "utf-8",
"env": {
"HELM_EXPERIMENTAL_OCI": "1",
"HOME": "/home/ubuntu",
"PATH": "/home/ubuntu/.local/bin:/go/bin:/home/ubuntu/bin:/opt/buildpack/tools/python/3.11.0/bin:/home/ubuntu/.npm-global/bin:/home/ubuntu/.cargo/bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/renovate/cache/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/renovate/cache/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
}
DEBUG: rawExec err (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"err": {
"name": "ExecError",
"cmd": "/bin/sh -c helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>",
"stderr": "Error: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n",
"stdout": "Hang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"stable\" chart repository\nUpdate Complete. ⎈Happy Helming!⎈\nSaving 2 charts\nDownloading <sub-chart name> from repo oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com\nSave error occurred: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n",
"options": {
"cwd": "/tmp/renovate/repos/github/<github-org>/<repo name>",
"encoding": "utf-8",
"env": {
"HELM_EXPERIMENTAL_OCI": "1",
"HOME": "/home/ubuntu",
"PATH": "/home/ubuntu/.local/bin:/go/bin:/home/ubuntu/bin:/opt/buildpack/tools/python/3.11.0/bin:/home/ubuntu/.npm-global/bin:/home/ubuntu/.cargo/bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/renovate/cache/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/renovate/cache/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
},
"exitCode": 1,
"message": "Command failed: helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>\nError: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n",
"stack": "ExecError: Command failed: helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>\nError: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n\n at ChildProcess.<anonymous> (/usr/src/app/node_modules/renovate/lib/util/exec/common.ts:99:11)\n at ChildProcess.emit (node:events:525:35)\n at ChildProcess.emit (node:domain:489:12)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)"
}
DEBUG: Failed to update Helm lock file (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)
"err": {
"name": "ExecError",
"cmd": "/bin/sh -c helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>",
"stderr": "Error: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n",
"stdout": "Hang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"stable\" chart repository\nUpdate Complete. ⎈Happy Helming!⎈\nSaving 2 charts\nDownloading <sub-chart name> from repo oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com\nSave error occurred: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n",
"options": {
"cwd": "/tmp/renovate/repos/github/<github-org>/<repo name>",
"encoding": "utf-8",
"env": {
"HELM_EXPERIMENTAL_OCI": "1",
"HOME": "/home/ubuntu",
"PATH": "/home/ubuntu/.local/bin:/go/bin:/home/ubuntu/bin:/opt/buildpack/tools/python/3.11.0/bin:/home/ubuntu/.npm-global/bin:/home/ubuntu/.cargo/bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/renovate/cache/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/renovate/cache/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
},
"exitCode": 1,
"message": "Command failed: helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>\nError: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n",
"stack": "ExecError: Command failed: helm dependency update --registry-config /tmp/renovate/cache/__renovate-private-cache/registry.json --repository-config /tmp/renovate/cache/__renovate-private-cache/repositories.yaml --repository-cache /tmp/renovate/cache/__renovate-private-cache/repositories <repo name>\nError: could not download oci://<aws-account>.dkr.ecr.ca-central-1.amazonaws.com/<sub-chart name>: pulling from host <aws-account>.dkr.ecr.ca-central-1.amazonaws.com failed with status code [manifests 0.5.2]: 401 Unauthorized\n\n at ChildProcess.<anonymous> (/usr/src/app/node_modules/renovate/lib/util/exec/common.ts:99:11)\n at ChildProcess.emit (node:events:525:35)\n at ChildProcess.emit (node:domain:489:12)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)"
}
DEBUG: Updated 1 package files (repository=<github-org>/<repo name>, branch=renovate/<branch name>-0.x)```
</details>
### Have you created a minimal reproduction repository?
No
193,596 | 6,886,318,064 | IssuesEvent | 2017-11-21 19:02:41 | andresriancho/w3af | https://api.github.com/repos/andresriancho/w3af | closed | Race condition when updating InfoSet | core plugin priority:medium | ```
A "DBException" exception was found while running grep.cross_domain_js
on "Method: GET | http://domain/gears_config.gears". The exception was:
"Failed to update() CrossDomainInfoSet instance because the original
unique_id (be73656f-cf35-4d37-9831-ca8ce7ad3e76) does not exist in the
DB, or the new unique_id (be73656f-cf35-4d37-9831-ca8ce7ad3e76) is
invalid." at knowledge_base.py:update():510.The full traceback is:
File
"/usr/local/w3af/w3af/core/controllers/core_helpers/consumers/grep.py",
line 151, in _consume
plugin.grep_wrapper(request, response)
File "/usr/local/w3af/w3af/core/controllers/plugins/grep_plugin.py",
line 55, in grep_wrapper
self.grep(fuzzable_request, response)
File "/usr/local/w3af/w3af/plugins/grep/cross_domain_js.py", line 83,
in grep
self._analyze_domain(response, script_full_url, tag)
File "/usr/local/w3af/w3af/plugins/grep/cross_domain_js.py", line 120,
in _analyze_domain
group_klass=CrossDomainInfoSet)
File "/usr/local/w3af/w3af/core/controllers/plugins/plugin.py", line
158, in kb_append_uniq_group
group_klass=group_klass)
File "/usr/local/w3af/w3af/core/data/kb/knowledge_base.py", line 160,
in append_uniq_group
self.update(old_info_set, info_set)
File "/usr/local/w3af/w3af/core/data/kb/knowledge_base.py", line 282,
in decorated
return _method(self, *args, **kwargs)
File "/usr/local/w3af/w3af/core/data/kb/knowledge_base.py", line 510,
in update
new_uniq_id))
``` | 1.0 | Race condition when updating InfoSet - ```
A "DBException" exception was found while running grep.cross_domain_js
on "Method: GET | http://domain/gears_config.gears". The exception was:
"Failed to update() CrossDomainInfoSet instance because the original
unique_id (be73656f-cf35-4d37-9831-ca8ce7ad3e76) does not exist in the
DB, or the new unique_id (be73656f-cf35-4d37-9831-ca8ce7ad3e76) is
invalid." at knowledge_base.py:update():510.The full traceback is:
File
"/usr/local/w3af/w3af/core/controllers/core_helpers/consumers/grep.py",
line 151, in _consume
plugin.grep_wrapper(request, response)
File "/usr/local/w3af/w3af/core/controllers/plugins/grep_plugin.py",
line 55, in grep_wrapper
self.grep(fuzzable_request, response)
File "/usr/local/w3af/w3af/plugins/grep/cross_domain_js.py", line 83,
in grep
self._analyze_domain(response, script_full_url, tag)
File "/usr/local/w3af/w3af/plugins/grep/cross_domain_js.py", line 120,
in _analyze_domain
group_klass=CrossDomainInfoSet)
File "/usr/local/w3af/w3af/core/controllers/plugins/plugin.py", line
158, in kb_append_uniq_group
group_klass=group_klass)
File "/usr/local/w3af/w3af/core/data/kb/knowledge_base.py", line 160,
in append_uniq_group
self.update(old_info_set, info_set)
File "/usr/local/w3af/w3af/core/data/kb/knowledge_base.py", line 282,
in decorated
return _method(self, *args, **kwargs)
File "/usr/local/w3af/w3af/core/data/kb/knowledge_base.py", line 510,
in update
new_uniq_id))
``` | priority | race condition when updating infoset a dbexception exception was found while running grep cross domain js on method get the exception was failed to update crossdomaininfoset instance because the original unique id does not exist in the db or the new unique id is invalid at knowledge base py update the full traceback is file usr local core controllers core helpers consumers grep py line in consume plugin grep wrapper request response file usr local core controllers plugins grep plugin py line in grep wrapper self grep fuzzable request response file usr local plugins grep cross domain js py line in grep self analyze domain response script full url tag file usr local plugins grep cross domain js py line in analyze domain group klass crossdomaininfoset file usr local core controllers plugins plugin py line in kb append uniq group group klass group klass file usr local core data kb knowledge base py line in append uniq group self update old info set info set file usr local core data kb knowledge base py line in decorated return method self args kwargs file usr local core data kb knowledge base py line in update new uniq id | 1 |
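The traceback in the record above is a classic lost-update race: two grep threads read the same InfoSet, and the slower `update()` finds that the unique_id it read has already been replaced. A minimal sketch of the usual fix, holding one lock across the check-and-swap (hypothetical toy store, not w3af's actual KnowledgeBase API):

```python
import threading

class InfoSetStore:
    """Toy stand-in for a knowledge base keyed by unique_id."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def append(self, uid, info):
        with self._lock:
            self._data[uid] = info

    def update(self, old_uid, new_uid, info):
        # The existence check and the swap happen under one lock, so no
        # other thread can replace old_uid in between -- the window that
        # produces "original unique_id ... does not exist in the DB".
        with self._lock:
            if old_uid not in self._data:
                raise KeyError(old_uid)
            del self._data[old_uid]
            self._data[new_uid] = info

store = InfoSetStore()
store.append("id-1", {"domains": 1})
store.update("id-1", "id-2", {"domains": 2})
```

Without the lock, a second thread could delete or replace `old_uid` between the membership test and the write, which is exactly the failure mode the exception message describes.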
301,743 | 9,223,476,052 | IssuesEvent | 2019-03-12 03:38:35 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | FATAL ERROR: unknown key "posixpath" in format string "{posixpath}" | area: west bug priority: medium | Commit 7e9d1bdda4a50dc66b6429739ff487a5bc959476 has introduced the following message on a build:
`FATAL ERROR: unknown key "posixpath" in format string "{posixpath}"`
I'm guessing this is from the west version not being up to date, but it's unclear to me how this is supposed to work. Do we need a minimum version of west installed and checked for?
`FATAL ERROR: unknown key "posixpath" in format string "{posixpath}"`
I'm guessing this is from west version not being up to date, but its unclear to me how this is suppose to work. Do we need a minimum version of west installed and checked for? | priority | fatal error unknown key posixpath in format string posixpath commit has introduced the following message on a build fatal error unknown key posixpath in format string posixpath i m guessing this is from west version not being up to date but its unclear to me how this is suppose to work do we need a minimum version of west installed and checked for | 1 |
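The error text in the record above matches Python's own `str.format` behaviour: formatting a template with a named field the caller does not supply raises `KeyError`, which a tool can catch and report as an unknown key. A hedged reproduction of that pattern (illustrative only, not west's actual code):

```python
template = "{posixpath}"

def render(template, **keys):
    # Report unknown fields the way the build output above does,
    # instead of letting the bare KeyError escape.
    try:
        return template.format(**keys)
    except KeyError as err:
        return f"FATAL ERROR: unknown key {err.args[0]!r} in format string {template!r}"

print(render(template))                       # an older tool supplies no 'posixpath' key
print(render(template, posixpath="/zephyr"))  # a newer tool does
```

This is consistent with the guess in the report: an older installed west simply does not know the `{posixpath}` key that the newer manifest format uses.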
663,511 | 22,195,523,293 | IssuesEvent | 2022-06-07 06:27:20 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | Do not translate "ConfigMap" into "配置" and "Secret" into "密钥". | help wanted area/console kind/bug kind/need-to-verify priority/medium | 
"密钥" means a private key or public key used to encrypt or decrypt certain information.
"配置" means a software or hardware configuration.
Both are too general and misleading. ConfigMap and Secret are concepts unique to Kubernetes, and we should not translate them.
/priority Medium
/milestone 3.2.0
/area console | 1.0 | Do not translate "ConfigMap" into "配置" and "Secret" into "密钥". - 
"密钥" means a private key or public key used to encrypt or decrypt certain information.
"配置" means a software or hardware configuration.
Both are too general and misleading. ConfigMap and Secret are concepts unique to Kubernetes, and we should not translate them.
/priority Medium
/milestone 3.2.0
/area console | priority | do not translate configmap into 配置 and secret into 密钥 密钥 means a private key or public key used to encrypt or decrypt certain information 配置 means a software or hardware configuration both are too general and misleading configmap and secret are concepts unique to kubernetes and we should not translate them priority medium milestone area console | 1 |
447,313 | 12,887,615,974 | IssuesEvent | 2020-07-13 11:34:16 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | The footer of the reset email template is not centered | Has-PR bug priority: medium | **Describe the bug**
The footer of the reset email template is not centered
**To Reproduce**
Steps to reproduce the behavior:
1. Try to use the forget password
2. Check the footer of the email you just received, you will notice that it is not centered compared with other BuddyBoss email templates
**Expected behavior**
it should be centered
**Screenshots**


**Support ticket links**
https://secure.helpscout.net/conversation/1169469813/74179
| 1.0 | The footer of the reset email template is not centered - **Describe the bug**
The footer of the reset email template is not centered
**To Reproduce**
Steps to reproduce the behavior:
1. Try to use the forget password
2. Check the footer of the email you just received, you will notice that it is not centered compared with other BuddyBoss email templates
**Expected behavior**
it should be centered
**Screenshots**


**Support ticket links**
https://secure.helpscout.net/conversation/1169469813/74179
| priority | the footer of the reset email template is not centered describe the bug the footer of the reset email template is not centered to reproduce steps to reproduce the behavior try to use the forget password check the footer of the email you just received you will notice that it is not centered compared with other buddyboss email templates expected behavior it should be centered screenshots support ticket links | 1 |
464,618 | 13,336,699,468 | IssuesEvent | 2020-08-28 08:00:32 | apluslms/a-plus | https://api.github.com/repos/apluslms/a-plus | opened | Implement a rejected state in the async assessment post | area: API area: grading interface effort: days experience: beginner priority: medium type: feature | See the definition of the `error` field the protocol documentation https://apluslms.github.io/protocols/aplus-assess-v1/#step-23-asynchronous-update | 1.0 | Implement a rejected state in the async assessment post - See the definition of the `error` field the protocol documentation https://apluslms.github.io/protocols/aplus-assess-v1/#step-23-asynchronous-update | priority | implement a rejected state in the async assessment post see the definition of the error field the protocol documentation | 1 |
317,780 | 9,669,232,760 | IssuesEvent | 2019-05-21 16:51:14 | x-klanas/Wrath | https://api.github.com/repos/x-klanas/Wrath | closed | Functional tool: screwdriver - screwing | 3 points medium priority user story | As a player I want the screwdriver to be capable of screwing parts into place.
- [x] There must be a screwdriver prefab
- [x] The screwdriver must snap to screws
- [x] If the screwdriver is snapped onto a screw, then while holding a button on the controller, the screwdriver should be screwing the screw
- [x] When pressing a button on the controller, the screwing direction (in or out) must change | 1.0 | Functional tool: screwdriver - screwing - As a player I want the screwdriver to be capable of screwing parts into place.
- [x] There must be a screwdriver prefab
- [x] The screwdriver must snap to screws
- [x] If the screwdriver is snapped onto a screw, while holding a button on the controller, the screw driver should be screwing the screw
- [x] When pressing a button on the controller, the screwing direction (in or out) must change | priority | functional tool screwdriver screwing as a player i want the screwdriver to be capable of screwing parts into place there must be a screwdriver prefab the screwdriver must snap to screws if the screwdriver is snapped onto a screw while holding a button on the controller the screw driver should be screwing the screw when pressing a button on the controller the screwing direction in or out must change | 1 |
205,511 | 7,102,758,291 | IssuesEvent | 2018-01-16 00:14:59 | davide-romanini/comictagger | https://api.github.com/repos/davide-romanini/comictagger | closed | Python Imaging library is not available and is needed for issue identification... | Priority-Medium bug imported | _From [murf...@gmail.com](https://code.google.com/u/116469566441608387730/) on June 27, 2014 18:36:45_
What version of ComicTagger are you using? 1.1.15-beta
On what operating system (Mac, Linux, Windows)? What version? linux
GUI or command line? gui
What steps will reproduce the problem?
1. Open folder filled with comics
2. Select a comic from list on right
3. Click auto-tag or auto-identify
What is the expected output? What do you see instead?
It gives an error when the auto-identify button is clicked and then the matches are displayed in the search box behind it. I can then manually click the OK button to load the tags. If I use auto-tag then the error is displayed in the cli as the following.

Python Imaging Library (PIL) is not available and is needed for issue identification.
Online search: No match found. Save aborted
# Auto-Tagging 28 of 100

Please provide any additional information below.
PIL is installed properly on my system. Also, all other dependencies are installed as per the instructions.
_Original issue: http://code.google.com/p/comictagger/issues/detail?id=54_
| 1.0 | Python Imaging library is not available and is needed for issue identification... - _From [murf...@gmail.com](https://code.google.com/u/116469566441608387730/) on June 27, 2014 18:36:45_
What version of ComicTagger are you using? 1.1.15-beta On what operating system (Mac, Linux, Windows)? What version? linux GUI or command line? gui What steps will reproduce the problem? 1.Open folder filled with comics
2.select a comic from list on right
3.click auto-tag or auto-identify What is the expected output? What do you see instead? It gives an error when the auto-identify button is clicked and then the matches are displayed in the search box behind it. I can then manually click the OK button to load the tags. If I use auto-tag then the error is displayed in the cli as the following.
Python Imaging Library (PIL) is not available and is needed for issue identification.
Online search: No match found. Save aborted
#
Auto-Tagging 28 of 100 Please provide any additional information below. PIL is installed properly on my system. Also, all other dependencies are installed as per the instructions.
_Original issue: http://code.google.com/p/comictagger/issues/detail?id=54_
| priority | python imaging library is not available and is needed for issue identification from on june what version of comictagger are you using beta on what operating system mac linux windows what version linux gui or command line gui what steps will reproduce the problem open folder filled with comics select a comic from list on right click auto tag or auto identify what is the expected output what do you see instead it gives an error when the auto identify button is clicked and then the matches are displayed in the search box behind it i can then manually click the ok button to load the tags if i use auto tag then the error is displayed in the cli as the following python imaging library pil is not available and is needed for issue identification online search no match found save aborted auto tagging of please provide any additional information below pil is installed properly on my system also all other dependencies are installed as per the instructions original issue | 1 |
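One plausible cause of the report above is that the availability check probes the legacy top-level `import Image` name, which fails on modern Pillow installs even though PIL is effectively present. A hedged sketch of a more forgiving check (illustrative; not ComicTagger's actual code):

```python
import importlib

def first_importable(candidates=("PIL.Image", "Image")):
    """Return the first module path that imports cleanly, else None.

    Probing both the modern ("PIL.Image") and legacy ("Image") layouts
    avoids reporting PIL as missing when only the import style differs.
    """
    for name in candidates:
        try:
            importlib.import_module(name)
            return name
        except ImportError:
            continue
    return None

print("PIL layout found:", first_importable())
```

The same helper can be reused for any optional dependency by passing a different `candidates` tuple.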
55,353 | 3,073,024,825 | IssuesEvent | 2015-08-19 19:51:05 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Test cases hangs in case of application crash. | bug imported Priority-Medium wontfix | _From [vaibha...@gmail.com](https://code.google.com/u/110283323986568949848/) on August 07, 2012 10:38:18_
What steps will reproduce the problem? 1.I have multiple test cases.
2.I execute normally 4 tests at once, Now while the test is executing, sometimes application crashes i.e. a pop is shown "Application is not responding. Would you like to close it?" with Wait and Ok button.
3.Now due to this remaining test cases just hangs and doesn't do anything. What is the expected output? What do you see instead? Is there any way to execute the remaining test cases by force stopping the application whenever this issue occurs? or atleast fail the rest of test cases and preventing the hanging of test case. What version of the product are you using? On what operating system? Robotium 3.4(Update today only. Thanks Renas for the new wonderful release!!)
Thanks,
Vaibhav
_Original issue: http://code.google.com/p/robotium/issues/detail?id=303_ | 1.0 | Test cases hangs in case of application crash. - _From [vaibha...@gmail.com](https://code.google.com/u/110283323986568949848/) on August 07, 2012 10:38:18_
What steps will reproduce the problem? 1.I have multiple test cases.
2.I execute normally 4 tests at once, Now while the test is executing, sometimes application crashes i.e. a pop is shown "Application is not responding. Would you like to close it?" with Wait and Ok button.
3.Now due to this remaining test cases just hangs and doesn't do anything. What is the expected output? What do you see instead? Is there any way to execute the remaining test cases by force stopping the application whenever this issue occurs? or atleast fail the rest of test cases and preventing the hanging of test case. What version of the product are you using? On what operating system? Robotium 3.4(Update today only. Thanks Renas for the new wonderful release!!)
Thanks,
Vaibhav
_Original issue: http://code.google.com/p/robotium/issues/detail?id=303_ | priority | test cases hangs in case of application crash from on august what steps will reproduce the problem i have multiple test cases i execute normally tests at once now while the test is executing sometimes application crashes i e a pop is shown application is not responding would you like to close it with wait and ok button now due to this remaining test cases just hangs and doesn t do anything what is the expected output what do you see instead is there any way to execute the remaining test cases by force stopping the application whenever this issue occurs or atleast fail the rest of test cases and preventing the hanging of test case what version of the product are you using on what operating system robotium update today only thanks renas for the new wonderful release thanks vaibhav original issue | 1 |
85,016 | 3,683,792,339 | IssuesEvent | 2016-02-24 15:18:16 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | opened | Currency: triggers on "mars" because of `m` prefix + `ars` symbol | Bug Priority: Medium Relevancy Triggering | It looks like we have a regex that checks for `million` but also `m`. As well `ars` is a currency name.
So together, `mars` looks like a valid trigger. This also happens for `musd` for example.
I think if the trigger contains `million` or `m` it should also have a number, or we may want to whitelist the phrase `mars`.
@laouji @MrChrisW, what are your thoughts?
------
IA Page: http://duck.co/ia/view/currency | 1.0 | Currency: triggers on "mars" because of `m` prefix + `ars` symbol - It looks like we have a regex that checks for `million` but also `m`. As well `ars` is a currency name.
So together, `mars` looks like a valid trigger. This also happens for `musd` for example.
I think if the trigger contains `million` or `m` it should also have a number, or we may want to whitelist the phrase `mars`.
@laouji @MrChrisW, what are your thoughts?
------
IA Page: http://duck.co/ia/view/currency | priority | currency triggers on mars because of m prefix ars symbol it looks like we have a regex that checks for million but also m as well ars is a currency name so together mars looks like a valid trigger this also happens for musd for example i think if the trigger contains million or m it should also have a number or we may want to whitelist the phrase mars laouji mrchrisw what are your thoughts ia page | 1 |
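The over-eager trigger can be reconstructed along these lines (a hypothetical regex, not the Spice plugin's actual one): with the amount optional and `m` accepted as a bare multiplier, the word `mars` parses as `m` + the `ars` currency code. Requiring a digit whenever the shorthand multiplier is used removes the false positive:

```python
import re

CODES = r"(?:ars|usd|eur)"  # tiny sample of currency codes

# Loose version: amount and multiplier both optional -> "mars" matches.
loose = re.compile(rf"^(?:\d+(?:\.\d+)?\s*)?(?:million|m)?\s*{CODES}$")

# Strict version: any match must start with a number, so the "m"
# multiplier can no longer combine with a bare currency code.
strict = re.compile(rf"^\d+(?:\.\d+)?\s*(?:million|m)?\s*{CODES}$")

assert loose.match("mars")            # bad trigger fires: "m" + "ars"
assert strict.match("mars") is None   # false positive gone
assert strict.match("5musd")          # "5 million USD" shorthand still works
assert strict.match("2 million ars")  # long form still works
```

Whitelisting the literal phrase `mars`, as suggested above, would also work, but anchoring the multiplier to a number fixes `musd`-style cases in the same stroke.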
56,967 | 3,081,223,598 | IssuesEvent | 2015-08-22 14:09:31 | bitfighter/bitfighter | https://api.github.com/repos/bitfighter/bitfighter | closed | Add 'owner' auth level in-game | enhancement imported Priority-Medium | _From [buckyballreaction](https://code.google.com/u/buckyballreaction/) on April 08, 2013 14:50:38_
Add new level for in-game authentication: owner.
Owners will have admin power but can do things that admins can't (like kick other admins)
Also, make it so only owners can change the admin password; and if changed, all admins are demoted immediately (like how changing levelchange password works)
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=195_ | 1.0 | Add 'owner' auth level in-game - _From [buckyballreaction](https://code.google.com/u/buckyballreaction/) on April 08, 2013 14:50:38_
Add new level for in-game authentication: owner.
Owners will have admin power but can do things that admins can't (like kick other admins)
Also, make it so only owners can change the admin password; and if changed, all admins are demoted immediately (like how changing levelchange password works)
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=195_ | priority | add owner auth level in game from on april add new level for in game authentication owner owners will have admin power but can do things that admins can t like kick other admins also make it so only owners can change the admin password and if changed all admins are demoted immediately like how changing levelchange password works original issue | 1 |
145,022 | 5,557,217,142 | IssuesEvent | 2017-03-24 11:22:49 | Alfresco/alfresco-sdk | https://api.github.com/repos/Alfresco/alfresco-sdk | closed | The AlfrescoPerson does not work in Maven SDK 1.1.1 | bug imported Priority-Medium | _From [m.swe...@aca-it.be](https://code.google.com/u/103056205031204277679/) on December 02, 2013 10:30:31_
What steps will reproduce the problem?
1. Create a fresh project with the Maven SDK 1.1.1
2. Create a test using the AlfrescoPerson rule to create a temporary user
3. The test will fail, because it cannot find the bean "testUserComponent".
What is the expected output? What do you see instead?
The rule should still work (just like in Maven SDK version 1.0.2), but it complains about the bean that cannot be found.
What version of the product are you using? On what operating system?
Maven SDK 1.1.1 on OSX with Maven 3.1.1 and Alfresco 4.2.e.
Please provide any additional information below.
The bean is defined in the file "community-integration-test-context.xml". I added it to my resources and referenced it in the @ContextConfiguration annotation to work around this problem. However this should work out of the box.
I also noticed that it is difficult to make it work out of the box, because it is hard getting to the application context before the JUnit rule is initialised. To fix it:
- My test implements ApplicationContextAware
- I have a custom ApplicationContextInit class which has a setter for the ApplicationContext and a simple getter. I instantiate it as a private variable on my test class.
- I pass the ApplicationContext within my test in the setApplicationContext method provided by the ApplicationContextAware interface
- I am then able to pass the ApplicationContextInit instance to the AlfrescoPerson rule.
This is also necessary to be able to use the TemporaryNodes rule. I feel that it shouldn't be so hard trying to make the default test utility classes to work.
_Original issue: http://code.google.com/p/maven-alfresco-archetypes/issues/detail?id=168_
| 1.0 | The AlfrescoPerson does not work in Maven SDK 1.1.1 - _From [m.swe...@aca-it.be](https://code.google.com/u/103056205031204277679/) on December 02, 2013 10:30:31_
What steps will reproduce the problem? 1. Create a fresh project with the Maven SDK 1.1.1
2. Create a test using the AlfrescoPerson rule to create a temporary user
3. The test will fail, because it cannot find the bean "testUserComponent". What is the expected output? What do you see instead? The rule should still work (just like in Maven SDK version 1.0.2), but it complains about the bean that cannot be found. What version of the product are you using? On what operating system? Maven SDK 1.1.1. on OSX with Maven 3.1.1 and Alfresco 4.2.e. Please provide any additional information below. The bean is defined in the file "community-integration-test-context.xml". I added it to my resources and referenced it in the @ContextConfiguration annotation to work around this problem. However this should work out of the box.
I also noticed that it is difficult to make it work out of the box, because it is hard getting to the application context before the JUnit rule is initialised. To fix it:
- My test implements ApplicationContextAware
- I have a custom ApplicationContextInit class which has a setter for the ApplicationContext and a simple getter. I instantiate it as a private variable on my test class.
- I pass the ApplicationContext within my test in the setApplicationContext method provided by the ApplicationContextAware interface
- I am then able to pass the ApplicationContextInit instance to the AlfrescoPerson rule.
This is also necessary to be able to use the TemporaryNodes rule. I feel that it shouldn't be so hard trying to make the default test utility classes to work.
_Original issue: http://code.google.com/p/maven-alfresco-archetypes/issues/detail?id=168_
| priority | the alfrescoperson does not work in maven sdk from on december what steps will reproduce the problem create a fresh project with the maven sdk create a test using the alfrescoperson rule to create a temporary user the test will fail because it cannot find the bean testusercomponent what is the expected output what do you see instead the rule should still work just like in maven sdk version but it complains about the bean that cannot be found what version of the product are you using on what operating system maven sdk on osx with maven and alfresco e please provide any additional information below the bean is defined in the file community integration test context xml i added it to my resources and referenced it in the contextconfiguration annotation to work around this problem however this should work out of the box i also noticed that it is difficult to make it work out of the box because it is hard getting to the application context before the junit rule is initialised to fix it my test implements applicationcontextaware i have a custom applicationcontextinit class which has a setter for the applicationcontext and a simple getter i instantiate it as a private variable on my test class i pass the applicationcontext within my test in the setapplicationcontext method provided by the applicationcontextaware interface i am then able to pass the applicationcontextinit instance to the alfrescoperson rule this is also necessary to be able to use the temporarynodes rule i feel that it shouldn t be so hard trying to make the default test utility classes to work original issue | 1 |
704,822 | 24,209,869,279 | IssuesEvent | 2022-09-25 18:45:49 | COS301-SE-2022/Office-Booker | https://api.github.com/repos/COS301-SE-2022/Office-Booker | closed | Preview of generated svg doesn't represent actual size when dropped in office maker | Type: Bug Priority: Medium Status: Busy Type: Cosmetic | Created a 200 x 500 meeting room, the component generated outside the office maker is not the same size as when its dropped in | 1.0 | Preview of generated svg doesn't represent actual size when dropped in office maker - Created a 200 x 500 meeting room, the component generated outside the office maker is not the same size as when its dropped in | priority | preview of generated svg doesn t represent actual size when dropped in office maker created a x meeting room the component generated outside the office maker is not the same size as when its dropped in | 1 |
386,093 | 11,431,736,724 | IssuesEvent | 2020-02-04 12:47:22 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | Allow setting a tag to highlight keywords with the tag via query string to docs generated with libdoc | beta 2 enhancement priority: medium | Scenario:
Libdoc generated keyword documation has ability to scope to certain tags within generated html documentation. When adding plugins (or extending existing libraries) it would be helpful for the reading to automatically highlight those added keywords. Currently, this can be done by selecting "tags" in the generated documentation but this selection process cannot be done directly by linking to the keyword document.
Proposal:
add a bit of javascript code into libdoc.html template that has querystring handler that checks if certain tag should be highlighted.
| 1.0 | Allow setting a tag to highlight keywords with the tag via query string to docs generated with libdoc - Scenario:
Libdoc generated keyword documation has ability to scope to certain tags within generated html documentation. When adding plugins (or extending existing libraries) it would be helpful for the reading to automatically highlight those added keywords. Currently, this can be done by selecting "tags" in the generated documentation but this selection process cannot be done directly by linking to the keyword document.
Proposal:
add a bit of javascript code into libdoc.html template that has querystring handler that checks if certain tag should be highlighted.
| priority | allow setting a tag to highlight keywords with the tag via query string to docs generated with libdoc scenario libdoc generated keyword documation has ability to scope to certain tags within generated html documentation when adding plugins or extending existing libraries it would be helpful for the reading to automatically highlight those added keywords currently this can be done by selecting tags in the generated documentation but this selection process cannot be done directly by linking to the keyword document proposal add a bit of javascript code into libdoc html template that has querystring handler that checks if certain tag should be highlighted | 1 |
283,944 | 8,728,443,923 | IssuesEvent | 2018-12-10 17:23:44 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Can not add member role to a user from the organizations page. | component:ui flag:🎱 priority:medium state:needs_devel | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- UI
##### SUMMARY
Could previously add user via member to an org. Can't do that now.
##### ENVIRONMENT
* AWX version: 2.1.1
##### STEPS TO REPRODUCE
* Create an org and user
* Go to org and add a user
* Try to add the user via the member role
##### EXPECTED RESULTS
* For the member role to be in the list
##### ACTUAL RESULTS
* not in the list
##### ADDITIONAL INFORMATION
* older versions of AWX had 11 permissions in that list. There are now 9.
* Also, the user list doesn't seem quite right. Users that have any permission end up in that list, rather than users with `member` role.


| 1.0 | Can not add member role to a user from the organizations page. - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- UI
##### SUMMARY
Could previously add user via member to an org. Can't do that now.
##### ENVIRONMENT
* AWX version: 2.1.1
##### STEPS TO REPRODUCE
* Create an org and user
* Go to org and add a user
* Try to add the user via the member role
##### EXPECTED RESULTS
* For the member role to be in the list
##### ACTUAL RESULTS
* not in the list
##### ADDITIONAL INFORMATION
* older versions of AWX had 11 permissions in that list. There are now 9.
* Also, the user list doesn't seem quite right. Users that have any permission end up in that list, rather than users with `member` role.


| priority | can not add member role to a user from the organizations page issue type bug report component name ui summary could previously add user via member to an org can t do that now environment awx version steps to reproduce create an org and user go to org and add a user try to add the user via the member role expected results for the member role to be in the list actual results not in the list additional information older versions of awx had permissions in that list there are now also the user list doesn t seem quite right users that have any permission end up in that list rather than users with member role | 1 |
679,283 | 23,226,411,821 | IssuesEvent | 2022-08-03 00:57:53 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Race condition in YbPgMemUpdateMax | kind/bug area/ysql priority/medium | Jira Link: [DB-3041](https://yugabyte.atlassian.net/browse/DB-3041)
### Description
Jenkins logs: https://jenkins.dev.yugabyte.com/job/github-yugabyte-db-centos-master-clang12-tsan/612/artifact/java/yb-pgsql/target/surefire-reports_org.yb.pgsql.TestPgConnection__testConnectionKills/
Detective link: https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=org.yb.pgsql.TestPgConnection&fail_tag=tsan&name=testConnectionKills&platform=all
```
m1|pid20919|:13399 WARNING: ThreadSanitizer: data race (pid=21059)
m1|pid20919|:13399 Write of size 8 at 0x0000018d4108 by thread T4:
m1|pid20919|:13399 #0 YbPgMemUpdateMax ${YB_SRC_ROOT}/src/postgres/src/backend/utils/mmgr/../../../../../../../src/postgres/src/backend/utils/mmgr/mcxt.c:50:37 (postgres+0xd670f6)
m1|pid20919|:13399 #1 decltype(std::__1::forward<void (*&)()>(fp)()) std::__1::__invoke<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3694:1 (libyb_pggate.so+0x1ece76)
m1|pid20919|:13399 #2 void std::__1::__invoke_void_return_wrapper<void, true>::__call<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__functional_base:348:9 (libyb_pggate.so+0x1ece01)
m1|pid20919|:13399 #3 std::__1::__function::__alloc_func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1558:16 (libyb_pggate.so+0x1ecdc1)
m1|pid20919|:13399 #4 std::__1::__function::__func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1732:12 (libyb_pggate.so+0x1eba2d)
m1|pid20919|:13399 #5 std::__1::__function::__value_func<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1885:16 (libserver_process.so+0x158464)
m1|pid20919|:13399 #6 std::__1::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2560:12 (libserver_process.so+0x157c69)
m1|pid20919|:13399 #7 yb::MemTracker::UpdateConsumption(bool) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:491:5 (libyb_util.so+0x5516e2)
m1|pid20919|:13399 #8 yb::MemTracker::Consume(long) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:528:19 (libyb_util.so+0x552547)
m1|pid20919|:13399 #9 yb::ScopedTrackedConsumption::ScopedTrackedConsumption(std::__1::shared_ptr<yb::MemTracker>, long, yb::StronglyTypedBool<yb::AlreadyConsumed_Tag>) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.h:538:17 (libyrpc.so+0x29f72f)
m1|pid20919|:13399 #10 yb::rpc::OutboundCall::SetRequestParam(yb::rpc::AnyMessageConstPtr, std::__1::shared_ptr<yb::MemTracker> const&) ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:247:27 (libyrpc.so+0x3177c7)
m1|pid20919|:13399 #11 yb::rpc::Proxy::PrepareCall(yb::rpc::AnyMessageConstPtr, yb::rpc::RpcController*) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:159:20 (libyrpc.so+0x32ca2f)
m1|pid20919|:13399 #12 yb::rpc::Proxy::AsyncRemoteCall(yb::rpc::RemoteMethod const*, std::__1::shared_ptr<yb::rpc::OutboundMethodMetrics const>, yb::rpc::AnyMessageConstPtr, yb::rpc::AnyMessagePtr, yb::rpc::RpcController*, std::__1::function<void ()>, bool) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:210:8 (libyrpc.so+0x32cf1e)
m1|pid20919|:13399 #13 yb::rpc::Proxy::DoAsyncRequest(yb::rpc::RemoteMethod const*, std::__1::shared_ptr<yb::rpc::OutboundMethodMetrics const>, yb::rpc::AnyMessageConstPtr, yb::rpc::AnyMessagePtr, yb::rpc::RpcController*, std::__1::function<void ()>, bool) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:234:5 (libyrpc.so+0x32c700)
m1|pid20919|:13399 #14 yb::rpc::Proxy::AsyncRequest(yb::rpc::RemoteMethod const*, std::__1::shared_ptr<yb::rpc::OutboundMethodMetrics const>, google::protobuf::Message const&, google::protobuf::Message*, yb::rpc::RpcController*, std::__1::function<void ()>) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:124:3 (libyrpc.so+0x32c5bf)
m1|pid20919|:13399 #15 yb::tserver::PgClientServiceProxy::HeartbeatAsync(yb::tserver::PgHeartbeatRequestPB const&, yb::tserver::PgHeartbeatResponsePB*, yb::rpc::RpcController*, std::__1::function<void ()>) const ${BUILD_ROOT}/src/yb/tserver/pg_client.proxy.cc:556:11 (libpg_client_proto.so+0x295b53)
m1|pid20919|:13399 #16 yb::pggate::PgClient::Impl::Heartbeat(bool) ${BUILD_ROOT}/../../src/yb/yql/pggate/pg_client.cc:154:13 (libyb_pggate.so+0x219064)
...
m1|pid20919|:13399 Previous write of size 8 at 0x0000018d4108 by thread T6:
m1|pid20919|:13399 #0 YbPgMemUpdateMax ${YB_SRC_ROOT}/src/postgres/src/backend/utils/mmgr/../../../../../../../src/postgres/src/backend/utils/mmgr/mcxt.c:50:37 (postgres+0xd670f6)
m1|pid20919|:13399 #1 decltype(std::__1::forward<void (*&)()>(fp)()) std::__1::__invoke<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3694:1 (libyb_pggate.so+0x1ece76)
m1|pid20919|:13399 #2 void std::__1::__invoke_void_return_wrapper<void, true>::__call<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__functional_base:348:9 (libyb_pggate.so+0x1ece01)
m1|pid20919|:13399 #3 std::__1::__function::__alloc_func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1558:16 (libyb_pggate.so+0x1ecdc1)
m1|pid20919|:13399 #4 std::__1::__function::__func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1732:12 (libyb_pggate.so+0x1eba2d)
m1|pid20919|:13399 #5 std::__1::__function::__value_func<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1885:16 (libserver_process.so+0x158464)
m1|pid20919|:13399 #6 std::__1::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2560:12 (libserver_process.so+0x157c69)
m1|pid20919|:13399 #7 yb::MemTracker::UpdateConsumption(bool) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:491:5 (libyb_util.so+0x5516e2)
m1|pid20919|:13399 #8 yb::MemTracker::Release(long) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:613:19 (libyb_util.so+0x551ce8)
m1|pid20919|:13399 #9 yb::ScopedTrackedConsumption::~ScopedTrackedConsumption() ${BUILD_ROOT}/../../src/yb/util/mem_tracker.h:579:17 (libyrpc.so+0x29f10e)
m1|pid20919|:13399 #10 yb::rpc::TcpStreamSendingData::~TcpStreamSendingData() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.h:31:8 (libyrpc.so+0x3e3b1a)
m1|pid20919|:13399 #11 void std::__1::destroy_at<yb::rpc::TcpStreamSendingData>(yb::rpc::TcpStreamSendingData*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__memory/base.h:118:13 (libyrpc.so+0x3e3ae9)
m1|pid20919|:13399 #12 void std::__1::allocator_traits<std::__1::allocator<yb::rpc::TcpStreamSendingData> >::destroy<yb::rpc::TcpStreamSendingData, void, void>(std::__1::allocator<yb::rpc::TcpStreamSendingData>&, yb::rpc::TcpStreamSendingData*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__memory/allocator_traits.h:315:9 (libyrpc.so+0x3e3909)
m1|pid20919|:13399 #13 std::__1::deque<yb::rpc::TcpStreamSendingData, std::__1::allocator<yb::rpc::TcpStreamSendingData> >::pop_front() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/deque:2711:5 (libyrpc.so+0x3e2cc9)
m1|pid20919|:13399 #14 yb::rpc::TcpStream::PopSending() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:261:12 (libyrpc.so+0x3df7c1)
m1|pid20919|:13399 #15 yb::rpc::TcpStream::DoWrite() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:249:7 (libyrpc.so+0x3defd8)
m1|pid20919|:13399 #16 yb::rpc::TcpStream::TryWrite() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:157:17 (libyrpc.so+0x3de910)
m1|pid20919|:13399 #17 yb::rpc::Connection::OutboundQueued() ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:162:26 (libyrpc.so+0x2b6e2f)
m1|pid20919|:13399 #18 yb::rpc::Reactor::ProcessOutboundQueue() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:720:13 (libyrpc.so+0x33f2bd)
``` | 1.0 | [YSQL] Race condition in YbPgMemUpdateMax - Jira Link: [DB-3041](https://yugabyte.atlassian.net/browse/DB-3041)
### Description
Jenkins logs: https://jenkins.dev.yugabyte.com/job/github-yugabyte-db-centos-master-clang12-tsan/612/artifact/java/yb-pgsql/target/surefire-reports_org.yb.pgsql.TestPgConnection__testConnectionKills/
Detective link: https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=org.yb.pgsql.TestPgConnection&fail_tag=tsan&name=testConnectionKills&platform=all
```
m1|pid20919|:13399 WARNING: ThreadSanitizer: data race (pid=21059)
m1|pid20919|:13399 Write of size 8 at 0x0000018d4108 by thread T4:
m1|pid20919|:13399 #0 YbPgMemUpdateMax ${YB_SRC_ROOT}/src/postgres/src/backend/utils/mmgr/../../../../../../../src/postgres/src/backend/utils/mmgr/mcxt.c:50:37 (postgres+0xd670f6)
m1|pid20919|:13399 #1 decltype(std::__1::forward<void (*&)()>(fp)()) std::__1::__invoke<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3694:1 (libyb_pggate.so+0x1ece76)
m1|pid20919|:13399 #2 void std::__1::__invoke_void_return_wrapper<void, true>::__call<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__functional_base:348:9 (libyb_pggate.so+0x1ece01)
m1|pid20919|:13399 #3 std::__1::__function::__alloc_func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1558:16 (libyb_pggate.so+0x1ecdc1)
m1|pid20919|:13399 #4 std::__1::__function::__func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1732:12 (libyb_pggate.so+0x1eba2d)
m1|pid20919|:13399 #5 std::__1::__function::__value_func<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1885:16 (libserver_process.so+0x158464)
m1|pid20919|:13399 #6 std::__1::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2560:12 (libserver_process.so+0x157c69)
m1|pid20919|:13399 #7 yb::MemTracker::UpdateConsumption(bool) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:491:5 (libyb_util.so+0x5516e2)
m1|pid20919|:13399 #8 yb::MemTracker::Consume(long) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:528:19 (libyb_util.so+0x552547)
m1|pid20919|:13399 #9 yb::ScopedTrackedConsumption::ScopedTrackedConsumption(std::__1::shared_ptr<yb::MemTracker>, long, yb::StronglyTypedBool<yb::AlreadyConsumed_Tag>) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.h:538:17 (libyrpc.so+0x29f72f)
m1|pid20919|:13399 #10 yb::rpc::OutboundCall::SetRequestParam(yb::rpc::AnyMessageConstPtr, std::__1::shared_ptr<yb::MemTracker> const&) ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:247:27 (libyrpc.so+0x3177c7)
m1|pid20919|:13399 #11 yb::rpc::Proxy::PrepareCall(yb::rpc::AnyMessageConstPtr, yb::rpc::RpcController*) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:159:20 (libyrpc.so+0x32ca2f)
m1|pid20919|:13399 #12 yb::rpc::Proxy::AsyncRemoteCall(yb::rpc::RemoteMethod const*, std::__1::shared_ptr<yb::rpc::OutboundMethodMetrics const>, yb::rpc::AnyMessageConstPtr, yb::rpc::AnyMessagePtr, yb::rpc::RpcController*, std::__1::function<void ()>, bool) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:210:8 (libyrpc.so+0x32cf1e)
m1|pid20919|:13399 #13 yb::rpc::Proxy::DoAsyncRequest(yb::rpc::RemoteMethod const*, std::__1::shared_ptr<yb::rpc::OutboundMethodMetrics const>, yb::rpc::AnyMessageConstPtr, yb::rpc::AnyMessagePtr, yb::rpc::RpcController*, std::__1::function<void ()>, bool) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:234:5 (libyrpc.so+0x32c700)
m1|pid20919|:13399 #14 yb::rpc::Proxy::AsyncRequest(yb::rpc::RemoteMethod const*, std::__1::shared_ptr<yb::rpc::OutboundMethodMetrics const>, google::protobuf::Message const&, google::protobuf::Message*, yb::rpc::RpcController*, std::__1::function<void ()>) ${BUILD_ROOT}/../../src/yb/rpc/proxy.cc:124:3 (libyrpc.so+0x32c5bf)
m1|pid20919|:13399 #15 yb::tserver::PgClientServiceProxy::HeartbeatAsync(yb::tserver::PgHeartbeatRequestPB const&, yb::tserver::PgHeartbeatResponsePB*, yb::rpc::RpcController*, std::__1::function<void ()>) const ${BUILD_ROOT}/src/yb/tserver/pg_client.proxy.cc:556:11 (libpg_client_proto.so+0x295b53)
m1|pid20919|:13399 #16 yb::pggate::PgClient::Impl::Heartbeat(bool) ${BUILD_ROOT}/../../src/yb/yql/pggate/pg_client.cc:154:13 (libyb_pggate.so+0x219064)
...
m1|pid20919|:13399 Previous write of size 8 at 0x0000018d4108 by thread T6:
m1|pid20919|:13399 #0 YbPgMemUpdateMax ${YB_SRC_ROOT}/src/postgres/src/backend/utils/mmgr/../../../../../../../src/postgres/src/backend/utils/mmgr/mcxt.c:50:37 (postgres+0xd670f6)
m1|pid20919|:13399 #1 decltype(std::__1::forward<void (*&)()>(fp)()) std::__1::__invoke<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3694:1 (libyb_pggate.so+0x1ece76)
m1|pid20919|:13399 #2 void std::__1::__invoke_void_return_wrapper<void, true>::__call<void (*&)()>(void (*&)()) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__functional_base:348:9 (libyb_pggate.so+0x1ece01)
m1|pid20919|:13399 #3 std::__1::__function::__alloc_func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1558:16 (libyb_pggate.so+0x1ecdc1)
m1|pid20919|:13399 #4 std::__1::__function::__func<void (*)(), std::__1::allocator<void (*)()>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1732:12 (libyb_pggate.so+0x1eba2d)
m1|pid20919|:13399 #5 std::__1::__function::__value_func<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1885:16 (libserver_process.so+0x158464)
m1|pid20919|:13399 #6 std::__1::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2560:12 (libserver_process.so+0x157c69)
m1|pid20919|:13399 #7 yb::MemTracker::UpdateConsumption(bool) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:491:5 (libyb_util.so+0x5516e2)
m1|pid20919|:13399 #8 yb::MemTracker::Release(long) ${BUILD_ROOT}/../../src/yb/util/mem_tracker.cc:613:19 (libyb_util.so+0x551ce8)
m1|pid20919|:13399 #9 yb::ScopedTrackedConsumption::~ScopedTrackedConsumption() ${BUILD_ROOT}/../../src/yb/util/mem_tracker.h:579:17 (libyrpc.so+0x29f10e)
m1|pid20919|:13399 #10 yb::rpc::TcpStreamSendingData::~TcpStreamSendingData() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.h:31:8 (libyrpc.so+0x3e3b1a)
m1|pid20919|:13399 #11 void std::__1::destroy_at<yb::rpc::TcpStreamSendingData>(yb::rpc::TcpStreamSendingData*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__memory/base.h:118:13 (libyrpc.so+0x3e3ae9)
m1|pid20919|:13399 #12 void std::__1::allocator_traits<std::__1::allocator<yb::rpc::TcpStreamSendingData> >::destroy<yb::rpc::TcpStreamSendingData, void, void>(std::__1::allocator<yb::rpc::TcpStreamSendingData>&, yb::rpc::TcpStreamSendingData*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__memory/allocator_traits.h:315:9 (libyrpc.so+0x3e3909)
m1|pid20919|:13399 #13 std::__1::deque<yb::rpc::TcpStreamSendingData, std::__1::allocator<yb::rpc::TcpStreamSendingData> >::pop_front() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220630123401-af96d73e39-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/deque:2711:5 (libyrpc.so+0x3e2cc9)
m1|pid20919|:13399 #14 yb::rpc::TcpStream::PopSending() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:261:12 (libyrpc.so+0x3df7c1)
m1|pid20919|:13399 #15 yb::rpc::TcpStream::DoWrite() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:249:7 (libyrpc.so+0x3defd8)
m1|pid20919|:13399 #16 yb::rpc::TcpStream::TryWrite() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:157:17 (libyrpc.so+0x3de910)
m1|pid20919|:13399 #17 yb::rpc::Connection::OutboundQueued() ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:162:26 (libyrpc.so+0x2b6e2f)
m1|pid20919|:13399 #18 yb::rpc::Reactor::ProcessOutboundQueue() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:720:13 (libyrpc.so+0x33f2bd)
``` | priority | race condition in ybpgmemupdatemax jira link description jenkins logs detective link warning threadsanitizer data race pid write of size at by thread ybpgmemupdatemax yb src root src postgres src backend utils mmgr src postgres src backend utils mmgr mcxt c postgres decltype std forward fp std invoke void opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c type traits libyb pggate so void std invoke void return wrapper call void opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional base libyb pggate so std function alloc func void operator opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libyb pggate so std function func void operator opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libyb pggate so std function value func operator const opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libserver process so std function operator const opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libserver process so yb memtracker updateconsumption bool build root src yb util mem tracker cc libyb util so yb memtracker consume long build root src yb util mem tracker cc libyb util so yb scopedtrackedconsumption scopedtrackedconsumption std shared ptr long yb stronglytypedbool build root src yb util mem tracker h libyrpc so yb rpc outboundcall setrequestparam yb rpc anymessageconstptr std shared ptr const build root src yb rpc outbound call cc libyrpc so yb rpc proxy preparecall yb rpc anymessageconstptr yb rpc rpccontroller build root src yb rpc proxy cc libyrpc so yb rpc proxy asyncremotecall yb rpc remotemethod const std shared ptr yb rpc anymessageconstptr yb rpc anymessageptr yb rpc rpccontroller std function bool build root src yb rpc proxy cc libyrpc so yb rpc proxy doasyncrequest yb rpc remotemethod const std shared ptr yb rpc 
anymessageconstptr yb rpc anymessageptr yb rpc rpccontroller std function bool build root src yb rpc proxy cc libyrpc so yb rpc proxy asyncrequest yb rpc remotemethod const std shared ptr google protobuf message const google protobuf message yb rpc rpccontroller std function build root src yb rpc proxy cc libyrpc so yb tserver pgclientserviceproxy heartbeatasync yb tserver pgheartbeatrequestpb const yb tserver pgheartbeatresponsepb yb rpc rpccontroller std function const build root src yb tserver pg client proxy cc libpg client proto so yb pggate pgclient impl heartbeat bool build root src yb yql pggate pg client cc libyb pggate so previous write of size at by thread ybpgmemupdatemax yb src root src postgres src backend utils mmgr src postgres src backend utils mmgr mcxt c postgres decltype std forward fp std invoke void opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c type traits libyb pggate so void std invoke void return wrapper call void opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional base libyb pggate so std function alloc func void operator opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libyb pggate so std function func void operator opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libyb pggate so std function value func operator const opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libserver process so std function operator const opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c functional libserver process so yb memtracker updateconsumption bool build root src yb util mem tracker cc libyb util so yb memtracker release long build root src yb util mem tracker cc libyb util so yb scopedtrackedconsumption scopedtrackedconsumption build root src yb util mem tracker h libyrpc so yb rpc tcpstreamsendingdata tcpstreamsendingdata build root 
src yb rpc tcp stream h libyrpc so void std destroy at yb rpc tcpstreamsendingdata opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c memory base h libyrpc so void std allocator traits destroy std allocator yb rpc tcpstreamsendingdata opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c memory allocator traits h libyrpc so std deque pop front opt yb build thirdparty yugabyte db thirdparty installed tsan libcxx include c deque libyrpc so yb rpc tcpstream popsending build root src yb rpc tcp stream cc libyrpc so yb rpc tcpstream dowrite build root src yb rpc tcp stream cc libyrpc so yb rpc tcpstream trywrite build root src yb rpc tcp stream cc libyrpc so yb rpc connection outboundqueued build root src yb rpc connection cc libyrpc so yb rpc reactor processoutboundqueue build root src yb rpc reactor cc libyrpc so | 1 |
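The TSAN report above is a classic lost-update race: two threads perform an unsynchronized compare-and-store on a shared running maximum. A minimal Python model of the pattern and one possible fix — all names here are invented for illustration, and the real fix in the C code would more likely use an atomic compare-exchange than a lock:

```python
import threading

class MaxTracker:
    """Toy model of a shared high-water-mark counter (names invented)."""

    def __init__(self):
        self.max_seen = 0
        self._lock = threading.Lock()

    def update_max_racy(self, current):
        # Unsynchronized read-modify-write: with two threads, both can read
        # the old maximum and a larger update can be overwritten by a
        # smaller one -- the interleaving TSAN flags in YbPgMemUpdateMax.
        if current > self.max_seen:
            self.max_seen = current

    def update_max_safe(self, current):
        # One possible fix: serialize the compare-and-store.
        with self._lock:
            if current > self.max_seen:
                self.max_seen = current

tracker = MaxTracker()
threads = [threading.Thread(target=tracker.update_max_safe, args=(v,))
           for v in (10, 99, 42)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(tracker.max_seen)  # 99, regardless of thread interleaving
```

With the lock, the final maximum is correct for any interleaving; the racy variant can lose an update when two threads pass the comparison concurrently.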
811,126 | 30,275,448,805 | IssuesEvent | 2023-07-07 19:09:37 | vscentrum/vsc-software-stack | https://api.github.com/repos/vscentrum/vsc-software-stack | opened | ProBiS | difficulty: easy C/C++ new priority: medium site:ugent binaries | * link to support ticket: [#2023070660000615](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=124926)
* website: http://insilab.org/probis-algorithm + https://gitlab.com/janezkonc/probis
* installation docs: http://insilab.org/probis-algorithm/
* toolchain: `gompi/2022b`
* easyblock to use: `MakeCp`
* required dependencies:
* [x] GSL
* notes:
* no proper versioning, so use datestamp of latest commit in https://gitlab.com/janezkonc/probis
* override `CC` and `CFLAGS` via `buildopts`, since [makefile](https://gitlab.com/janezkonc/probis/-/blob/master/makefile) includes hardcoded stuff
* info provided by developer: `"If that doesn't work, in "const.h" try changing line 170: "#define WORD 32" to "#define WORD 64". This is definitely machine dependent bit"`
* test case is available in helpdesk ticket
* effort: *(TBD)*
* other install methods
* conda: no
* container image: no
* pre-built binaries (RHEL8 Linux x86_64): yes (http://insilab.org/probis-algorithm/)
* easyconfig outside EasyBuild: no
| 1.0 | ProBiS - * link to support ticket: [#2023070660000615](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=124926)
* website: http://insilab.org/probis-algorithm + https://gitlab.com/janezkonc/probis
* installation docs: http://insilab.org/probis-algorithm/
* toolchain: `gompi/2022b`
* easyblock to use: `MakeCp`
* required dependencies:
* [x] GSL
* notes:
* no proper versioning, so use datestamp of latest commit in https://gitlab.com/janezkonc/probis
* override `CC` and `CFLAGS` via `buildopts`, since [makefile](https://gitlab.com/janezkonc/probis/-/blob/master/makefile) includes hardcoded stuff
* info provided by developer: `"If that doesn't work, in "const.h" try changing line 170: "#define WORD 32" to "#define WORD 64". This is definitely machine dependent bit"`
* test case is available in helpdesk ticket
* effort: *(TBD)*
* other install methods
* conda: no
* container image: no
* pre-built binaries (RHEL8 Linux x86_64): yes (http://insilab.org/probis-algorithm/)
* easyconfig outside EasyBuild: no
| priority | probis link to support ticket website installation docs toolchain gompi easyblock to use makecp required dependencies gsl notes no proper versioning so use datestamp of latest commit in override cc and cflags via buildopts since includes hardcoded stuff info provided by developer if that doesn t work in const h try changing line define word to define word this is definitely machine dependent bit test case is available in helpdesk ticket effort tbd other install methods conda no container image no pre built binaries linux yes easyconfig outside easybuild no | 1 |
277,683 | 8,631,483,126 | IssuesEvent | 2018-11-22 07:52:20 | Qiskit/qiskit-terra | https://api.github.com/repos/Qiskit/qiskit-terra | closed | latex_source visualizer should not use transpiler(format='json') | priority: medium | We are going to remove the option `format='json'` from the transpiler (see #1129). For that, we need to move the visualizers into the function `get_instractions` introduced in PR #1187.
The function `_generate_latex_source` calls `transpile_dag(dag_circuit, basis_gates=basis, format='json')`. This needs to be removed in favor of `_utils.get_instructions(dag)`. | 1.0 | latex_source visualizer should not use transpiler(format='json') - We are going to remove the option `format='json'` from the transpiler (see #1129). For that, we need to move the visualizers into the function `get_instractions` introduced in PR #1187.
The function `_generate_latex_source` calls `transpile_dag(dag_circuit, basis_gates=basis, format='json')`. This needs to be removed in favor of `_utils.get_instructions(dag)`. | priority | latex source visualizer should not use transpiler format json we are going to remove the option format json from the transpiler see for that we need to move the visualizers into the function get instractions introduced in pr the function generate latex source calls transpile dag dag circuit basis gates basis format json this needs to be removed in favor of utils get instructions dag | 1 |
264,592 | 8,317,345,277 | IssuesEvent | 2018-09-25 11:49:14 | lnupmi11/PofCIS_Team1 | https://api.github.com/repos/lnupmi11/PofCIS_Team1 | closed | Implement Triangle class. | Task priority: medium | * Fields:
- A vertex coordinates
- B vertex coordinates
- C vertex coordinates
* Methods:
- readFile
- writeFile
- calcSquare
- calcPerimeter
Create some additional methods if needed. | 1.0 | Implement Triangle class. - * Fields:
- A vertex coordinates
- B vertex coordinates
- C vertex coordinates
* Methods:
- readFile
- writeFile
- calcSquare
- calcPerimeter
Create some additional methods if needed. | priority | implement triangle class fields a vertex coordinates b vertex coordinates c vertex coordinates methods readfile writefile calcsquare calcperimeter create some additional methods if needed | 1 |
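The Triangle task above (three vertex coordinates plus square and perimeter calculations) can be sketched in a few lines. This is an illustrative Python version: the method names follow the issue, but the coordinate representation and formulas (Euclidean distances, shoelace area) are my assumptions, and the readFile/writeFile pair is omitted for brevity:

```python
import math

class Triangle:
    """Sketch of the requested class; names follow the issue, formulas assumed."""

    def __init__(self, a, b, c):
        # Each vertex is an (x, y) pair.
        self.a, self.b, self.c = a, b, c

    @staticmethod
    def _dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def calc_perimeter(self):
        return (self._dist(self.a, self.b)
                + self._dist(self.b, self.c)
                + self._dist(self.c, self.a))

    def calc_square(self):
        # Shoelace formula: area straight from the vertex coordinates.
        (ax, ay), (bx, by), (cx, cy) = self.a, self.b, self.c
        return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2

t = Triangle((0, 0), (3, 0), (0, 4))   # right triangle with legs 3 and 4
print(t.calc_perimeter())  # 12.0
print(t.calc_square())     # 6.0
```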
275,729 | 8,579,640,511 | IssuesEvent | 2018-11-13 09:44:19 | Scifabric/pybossa | https://api.github.com/repos/Scifabric/pybossa | closed | Add tags table for more flexibility in organising projects | API priority.medium | What do you think about adding a basic tags/labels table so that projects can be grouped, and therefore searched, according to more flexible criteria. It's difficult to decide sometimes how to group things - so, in our new frontend we currently have projects grouped by volume, but they could equally be grouped by location, or type of task (transcribe, mark up etc.). The API is getting ever more flexible and it would be really cool if we could use something like [this](https://monterail.github.io/vue-multiselect/) to tag our projects and then allow people to filter via multiple tags.
Tags could be read by all but only created/updated/deleted by admins (new tags could be suggested via a forum or something, but that's outside of the scope of this). Tags wouldn't be linked to a particular category and could we even just start with say `id`, `name` and `info` (maybe a `hex` field for a custom colour would be cool, although it can go in the info otherwise). Happy to submit a PR but obviously want to see what you think first! | 1.0 | Add tags table for more flexibility in organising projects - What do you think about adding a basic tags/labels table so that projects can be grouped, and therefore searched, according to more flexible criteria. It's difficult to decide sometimes how to group things - so, in our new frontend we currently have projects grouped by volume, but they could equally be grouped by location, or type of task (transcribe, mark up etc.). The API is getting ever more flexible and it would be really cool if we could use something like [this](https://monterail.github.io/vue-multiselect/) to tag our projects and then allow people to filter via multiple tags.
Tags could be read by all but only created/updated/deleted by admins (new tags could be suggested via a forum or something, but that's outside of the scope of this). Tags wouldn't be linked to a particular category and could we even just start with say `id`, `name` and `info` (maybe a `hex` field for a custom colour would be cool, although it can go in the info otherwise). Happy to submit a PR but obviously want to see what you think first! | priority | add tags table for more flexibility in organising projects what do you think about adding a basic tags labels table so that projects can be grouped and therefore searched according to more flexible criteria it s difficult to decide sometimes how to group things so in our new frontend we currently have projects grouped by volume but they could equally be grouped by location or type of task transcribe mark up etc the api is getting ever more flexible and it would be really cool if we could use something like to tag our projects and then allow people to filter via multiple tags tags could be read by all but only created updated deleted by admins new tags could be suggested via a forum or something but that s outside of the scope of this tags wouldn t be linked to a particular category and could we even just start with say id name and info maybe a hex field for a custom colour would be cool although it can go in the info otherwise happy to submit a pr but obviously want to see what you think first | 1 |
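The multi-tag filtering the issue asks for (show only projects carrying every selected tag) reduces to a subset test. A minimal in-memory Python sketch with made-up example data — the real implementation would query the proposed tags table via a join:

```python
# Made-up stand-in for the proposed tags table: each project carries a
# set of tag names, and a project matches when it has EVERY tag the
# user selected in the multiselect widget.
projects = [
    {"id": 1, "name": "Plays",    "tags": {"transcribe", "volume-1"}},
    {"id": 2, "name": "Wildlife", "tags": {"mark-up", "kenya"}},
    {"id": 3, "name": "Weather",  "tags": {"transcribe", "kenya"}},
]

def filter_by_tags(items, wanted):
    # Subset test: wanted <= p["tags"] is True when all wanted tags appear.
    wanted = set(wanted)
    return [p for p in items if wanted <= p["tags"]]

print([p["id"] for p in filter_by_tags(projects, {"transcribe"})])           # [1, 3]
print([p["id"] for p in filter_by_tags(projects, {"transcribe", "kenya"})])  # [3]
```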
308,203 | 9,436,170,530 | IssuesEvent | 2019-04-13 03:45:09 | briandgoldberg/Homeless-Poker | https://api.github.com/repos/briandgoldberg/Homeless-Poker | closed | Add a secret key to the contracts constructor. | Priority: Medium chore 🧹 web3 | The client generates a SHA string and passes it as an argument to the constructor of the contract when deploying.
This secret key will allow the deployer of the contract to the secret (e.g. as a link) to other participants.
Bad actors wouldn't be able to 51% attack the contract. | 1.0 | Add a secret key to the contracts constructor. - The client generates a SHA string and passes it as an argument to the constructor of the contract when deploying.
This secret key will allow the deployer of the contract to distribute the secret (e.g. as a link) to other participants.
Bad actors wouldn't be able to 51% attack the contract. | priority | add a secret key to the contracts constructor the client generates a sha string and passes it as an argument to the constructor of the contract when deploying this secret key will allow the deployer of the contract to the secret e g as a link to other participants bad actors wouldn t be able to attack the contract | 1 |
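The commit-style idea in the record above (client generates a SHA string, the contract constructor receives it, the deployer shares the plain secret as a link) can be sketched on the client side. The issue does not say which SHA variant or encoding is used; SHA-256 over a random hex token is an assumption here, and `make_deploy_secret`/`secret_matches` are hypothetical helper names:

```python
import hashlib
import secrets

def make_deploy_secret():
    """Generate a random secret and the SHA-256 digest that would be
    passed to the contract constructor. The contract stores only the
    digest; the plain secret is shared out-of-band (e.g. as a link)."""
    secret = secrets.token_hex(16)
    digest = hashlib.sha256(secret.encode()).hexdigest()
    return secret, digest

def secret_matches(secret, stored_digest):
    # What the contract-side check would compute when a participant
    # presents the secret they received from the deployer.
    return hashlib.sha256(secret.encode()).hexdigest() == stored_digest

secret, digest = make_deploy_secret()
assert secret_matches(secret, digest)
assert not secret_matches("wrong-guess", digest)
```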
710,283 | 24,413,291,200 | IssuesEvent | 2022-10-05 14:03:56 | vscentrum/vsc-software-stack | https://api.github.com/repos/vscentrum/vsc-software-stack | closed | YALES2 | difficulty: hard Fortran MPI priority: medium site:vub | * link to support ticket: INC0123241 of VUB
* website: https://www.coria-cfd.fr/index.php/YALES2
* installation docs: manual (PDF) in the source code
* toolchain: `foss/2021a`
* easyblock to use: `Bundle`
* required dependencies:
* [x] Python
* [x] CMake
* [x] git
* [x] HDF5
* [x] CWIPI
* [x] Hypre
* [x] METIS
* [x] mmg
* [x] PETSc
* [x] SLEPc
* [x] SCOTCH
* optional dependencies:
* [x] SciPy-bundle
* [x] matplotlib
* [x] f90wrap
* notes:
* The workflow for YALES2 is not suitable for a standard installation. Users have to compile their simulations from source code. So all we can do is provide a module with the necessary dependencies, configuration files and setup the environment properly to be able to build simulations.
* effort: *hard*
| 1.0 | YALES2 - * link to support ticket: INC0123241 of VUB
* website: https://www.coria-cfd.fr/index.php/YALES2
* installation docs: manual (PDF) in the source code
* toolchain: `foss/2021a`
* easyblock to use: `Bundle`
* required dependencies:
* [x] Python
* [x] CMake
* [x] git
* [x] HDF5
* [x] CWIPI
* [x] Hypre
* [x] METIS
* [x] mmg
* [x] PETSc
* [x] SLEPc
* [x] SCOTCH
* optional dependencies:
* [x] SciPy-bundle
* [x] matplotlib
* [x] f90wrap
* notes:
* The workflow for YALES2 is not suitable for a standard installation. Users have to compile their simulations from source code. So all we can do is provide a module with the necessary dependencies, configuration files and setup the environment properly to be able to build simulations.
* effort: *hard*
| priority | link to support ticket of vub website installation docs manual pdf in the source code toolchain foss easyblock to use bundle required dependencies python cmake git cwipi hypre metis mmg petsc slepc scotch optional dependencies scipy bundle matplotlib notes the workflow for is not suitable for a standard installation users have to compile their simulations from source code so all we can do is provide a module with the necessary dependencies configuration files and setup the environment properly to be able to build simulations effort hard | 1 |
369,984 | 10,923,730,913 | IssuesEvent | 2019-11-22 08:31:58 | BEXIS2/Core | https://api.github.com/repos/BEXIS2/Core | closed | User Story 21: As a user I would like to have a definition of missing values | Priority: Medium Status: Pending Type: Enhancement | have a definition of missing values, which is also recognized by the database. | 1.0 | User Story 21: As a user I would like to have a definition of missing values - have a definition of missing values, which is also recognized by the database. | priority | user story as a user i would like to have a definition of missing values have a definition of missing values which is also recognized by the database | 1 |
159,989 | 6,065,667,199 | IssuesEvent | 2017-06-14 16:43:03 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | Mock up fake diff data in imagec | component/docker-api-server component/imagec priority/medium | Acceptance criteria:
- [ ] Create mock data that can be used to work on image construction in imagec while we wait for the completion of docker diff.
---
Docker push cannot complete without docker diff. While we wait for the completion of docker diff, we can use docker save to create us an image tar and hand create diff layer tars to mock up the data that docker diff would have returned. This will allow docker push's image construction work to proceed. | 1.0 | Mock up fake diff data in imagec - Acceptance criteria:
- [ ] Create mock data that can be used to work on image construction in imagec while we wait for the completion of docker diff.
---
Docker push cannot complete without docker diff. While we wait for the completion of docker diff, we can use docker save to create us an image tar and hand create diff layer tars to mock up the data that docker diff would have returned. This will allow docker push's image construction work to proceed. | priority | mock up fake diff data in imagec acceptance criteria create mock data that can be used to work on image construction in imagec while we wait for the completion of docker diff docker push cannot complete without docker diff while we wait for the completion of docker diff we can use docker save to create us an image tar and hand create diff layer tars to mock up the data that docker diff would have returned this will allow docker push s image construction work to proceed | 1 |
747,961 | 26,102,578,296 | IssuesEvent | 2022-12-27 09:07:24 | bounswe/bounswe2022group9 | https://api.github.com/repos/bounswe/bounswe2022group9 | closed | [Mobile] Unfollow button | Priority: Medium Mobile | Currently, this function is not available to the user because the unfollow button is disabled. Activation of this button needs to be linked to the unfollow endpoint in the backend.
Deadline : 26.12.2022 , Sunday 17.00 | 1.0 | [Mobile] Unfollow button - Currently, this function is not available to the user because the unfollow button is disabled. Activation of this button needs to be linked to the unfollow endpoint in the backend.
Deadline : 26.12.2022 , Sunday 17.00 | priority | unfollow button currently this function is not available to the user because the unfollow button is disabled activation of this button needs to be linked to the unfollow endpoint in the backend deadline sunday | 1 |
730,446 | 25,173,146,193 | IssuesEvent | 2022-11-11 06:25:15 | saudalnasser/strifelux | https://api.github.com/repos/saudalnasser/strifelux | opened | feat: modals | type: feature priority: medium | ## Problem
need an easy way to create and handle modals in an organized way and with minimal effort.
## Solution(s)
provide an easy way of:
- handling modals
- organizing modals
| 1.0 | feat: modals - ## Problem
need an easy way to create and handle modals in an organized way and with minimal effort.
## Solution(s)
provide an easy way of:
- handling modals
- organizing modals
| priority | feat modals problem need an easy way to create and handle modals in an organized way and with minimal effort solution s provide an easy way of handling modals organizing modals | 1 |
299,919 | 9,205,970,575 | IssuesEvent | 2019-03-08 12:16:52 | canonical-websites/build.snapcraft.io | https://api.github.com/repos/canonical-websites/build.snapcraft.io | closed | Add global nav | Priority: Medium | To align with the rest of the sites, we should include the global nav in build so it matches the rest of the sections of snapcraft.io | 1.0 | Add global nav - To align with the rest of the sites, we should include the global nav in build so it matches the rest of the sections of snapcraft.io | priority | add global nav to align with the rest of the sites we should include the global nav in build so it matches the rest of the sections of snapcraft io | 1 |
578,166 | 17,145,843,464 | IssuesEvent | 2021-07-13 14:31:16 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | closed | Add payment_type or full payment to event admin API | api events feature payments priority: medium | ### Motivation
`api/v2/admin/events/<eventPk>/registrations/` currently only gives the uuid of a payment, so to display in the admin screen how it was paid, the payment must be requested separately. Doing this for all of the registrations would be very inefficient (like 40 extra requests to load the event admin). If we simply add the payment_type or replace the payment uuid with a payment serializer, it will be much simpler.
| 1.0 | Add payment_type or full payment to event admin API - ### Motivation
`api/v2/admin/events/<eventPk>/registrations/` currently only gives the uuid of a payment, so to display in the admin screen how it was paid, the payment must be requested separately. Doing this for all of the registrations would be very inefficient (like 40 extra requests to load the event admin). If we simply add the payment_type or replace the payment uuid with a payment serializer, it will be much simpler.
| priority | add payment type or full payment to event admin api motivation api admin events registrations currently only gives the uuid of a payment so to display in the admin screen how it was paid the payment must be requested separately doing this for all of the registrations would be very inefficient like extra requests to load the event admin if we simply add the payment type or replace the payment uuid with a payment serializer it will be much simpler | 1 |
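The inefficiency described in the record above is a classic N+1 pattern; this toy sketch (all field names and the `fetch_payment` helper are illustrative, not concrexit's actual API) shows why embedding `payment_type` in the registration serializer removes the ~40 follow-up requests:

```python
# 40 registrations, each carrying only the uuid of its payment.
registrations = [{"pk": i, "payment": f"uuid-{i}"} for i in range(40)]

calls = 0

def fetch_payment(uuid):
    """Stand-in for one HTTP round-trip to the payment detail endpoint."""
    global calls
    calls += 1
    return {"uuid": uuid, "type": "card_payment"}

# Current shape: the admin screen resolves every payment separately.
types = [fetch_payment(r["payment"])["type"] for r in registrations]
assert calls == 40  # one extra request per registration

# Proposed shape: the registration serializer embeds payment_type,
# so no follow-up requests are needed at all.
serialized = [
    {"pk": r["pk"], "payment": r["payment"], "payment_type": "card_payment"}
    for r in registrations
]
assert all("payment_type" in r for r in serialized)
```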
669,983 | 22,648,455,568 | IssuesEvent | 2022-07-01 11:04:06 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [Phase 1][YSQL][Colocation] Get rid of tablegroup_oid reloption | kind/enhancement area/ysql priority/medium | Jira Link: [DB-607](https://yugabyte.atlassian.net/browse/DB-607)
### Description
Right now, `tablegroup_oid` is the only YB-specific option we store in reloptions rather than in a YB-specific struct. There's little reason for that. | 1.0 | [Phase 1][YSQL][Colocation] Get rid of tablegroup_oid reloption - Jira Link: [DB-607](https://yugabyte.atlassian.net/browse/DB-607)
### Description
Right now, `tablegroup_oid` is the only YB-specific option we store in reloptions rather than in a YB-specific struct. There's little reason for that. | priority | get rid of tablegroup oid reloption jira link description right now tablegroup oid is the only yb specific option we store in reloptions rather than in a yb specific struct there s little reason for that | 1 |
52,460 | 3,023,532,115 | IssuesEvent | 2015-08-01 16:01:23 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | opened | patch installation: file version.txt should only be updated at the end of the installation | installation priority: medium | in case there is a problem, the file "version.txt" should not be updated. Currently, it is copied at the beginning of the procedure. | 1.0 | patch installation: file version.txt should only be updated at the end of the installation - in case there is a problem, the file "version.txt" should not be updated. Currently, it is copied at the beginning of the procedure. | priority | patch installation file version txt should only be updated at the end of the installation in case there is a problem the file version txt should not be updated currently it is copied at the beginning of the procedure | 1 |
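The fix suggested in the record above (only write `version.txt` once installation has succeeded) might look like this sketch, which is an assumption about the installer's shape rather than the Spinal Cord Toolbox code itself; a temp-file plus atomic rename also avoids leaving a half-written file if the write is interrupted:

```python
import os
import pathlib
import tempfile

def finalize_version(install_dir, version):
    """Write version.txt via temp file + atomic rename; call this only
    after every other installation step has succeeded."""
    fd, tmp = tempfile.mkstemp(dir=install_dir)
    with os.fdopen(fd, "w") as f:
        f.write(version + "\n")
    os.replace(tmp, os.path.join(install_dir, "version.txt"))  # atomic on POSIX

def install(install_dir, version, steps):
    for step in steps:
        step()                                  # may raise; version.txt untouched so far
    finalize_version(install_dir, version)      # reached only on success

def failing_step():
    raise RuntimeError("simulated mid-install failure")

d = tempfile.mkdtemp()
try:
    install(d, "2.0", [failing_step])
except RuntimeError:
    pass
assert not (pathlib.Path(d) / "version.txt").exists()  # failed install leaves no version file
install(d, "2.0", [])
assert (pathlib.Path(d) / "version.txt").read_text() == "2.0\n"
```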
590,318 | 17,776,121,340 | IssuesEvent | 2021-08-30 19:27:28 | PolyPup-Farm/polypup-ui-alpha-testers | https://api.github.com/repos/PolyPup-Farm/polypup-ui-alpha-testers | closed | Unable to connect wallet to PolyPup UI (Brave Browser) | enhancement medium priority | Unable to connect wallet to PolyPup UI after multiple attempts. Button highlight appears on click but no other response from UI
Browser: Brave
Wallet: Metamask


| 1.0 | Unable to connect wallet to PolyPup UI (Brave Browser) - Unable to connect wallet to PolyPup UI after multiple attempts. Button highlight appears on click but no other response from UI
Browser: Brave
Wallet: Metamask


| priority | unable to connect wallet to polypup ui brave browser unable to connect wallet to polypup ui after multiple attempts button highlight appears on click but no other response from ui browser brave wallet metamask | 1 |
264,638 | 8,317,819,220 | IssuesEvent | 2018-09-25 13:13:19 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | Signal handler registered outside Python causes error | bug priority: medium | I am using robot.run in a script of mine, which I execute from another open application, to run some robot tests. Trouble is, I keep getting the following TypeError when I try and execute my script.

This is preventing my tests from finishing and tearing down properly.
Upon some inspection, it looks like self._orig_sigint and self._orig_sigterm are getting set to "None" from signal.getsignal()

Oddly enough, this only happens when I first run my script. If I re-run the script again, it works fine.
I can edit some things myself in signalhandler.py to get it to work, but I'd rather any changes like that come from the official robotframework folks :) | 1.0 | Signal handler registered outside Python causes error - I am using robot.run in a script of mine, which I execute from another open application, to run some robot tests. Trouble is, I keep getting the following TypeError when I try and execute my script.

This is preventing my tests from finishing and tearing down properly.
Upon some inspection, it looks like self._orig_sigint and self._orig_sigterm are getting set to "None" from signal.getsignal()

Oddly enough, this only happens when I first run my script. If I re-run the script again, it works fine.
I can edit some things myself in signalhandler.py to get it to work, but I'd rather any changes like that come from the official robotframework folks :) | priority | signal handler registered outside python causes error i am using robot run in a script of mine which i execute from another open application to run some robot tests trouble is i keep getting the following typeerror when i try and execute my script this is preventing my tests from finishing and tearing down properly upon some inspection it looks like self orig sigint and self orig sigterm are getting set to none from signal getsignal oddly enough this only happens when i first run my script if i re run the script again it works fine i can edit some things myself in signalhandler py to get it to work but i d rather any changes like that come from the official robotframework folks | 1 |
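The screenshots in the record above suggest `signal.getsignal()` returned `None` because the handler was installed outside the Python interpreter, so passing that `None` back to `signal.signal()` during restore raises `TypeError`. A guarded save/restore might look like this sketch (illustrative, not Robot Framework's actual `signalhandler.py`):

```python
import signal

class SafeSignalHandler:
    """Save and restore SIGINT/SIGTERM handlers, tolerating the None that
    signal.getsignal() returns when the handler was installed outside the
    Python interpreter (the situation described in the record above)."""

    def __init__(self):
        self._orig = {}

    def install(self, handler):
        for sig in (signal.SIGINT, signal.SIGTERM):
            self._orig[sig] = signal.getsignal(sig)   # may be None
            signal.signal(sig, handler)

    def restore(self):
        for sig, orig in self._orig.items():
            if orig is None:
                # Cannot restore a non-Python handler; fall back to default
                # instead of passing None and raising TypeError.
                signal.signal(sig, signal.SIG_DFL)
            else:
                signal.signal(sig, orig)

h = SafeSignalHandler()
h.install(lambda signum, frame: None)
h.restore()
```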
503,454 | 14,592,201,049 | IssuesEvent | 2020-12-19 16:31:55 | alexleen/log4net-config-editor | https://api.github.com/repos/alexleen/log4net-config-editor | opened | Support R/W From Element Inner Text | enhancement medium priority | The log4net library expects property values to be located in the `value` attribute of an element. If not, it will also search the element's inner text. This editor currently does not read or write from/to an elemen's inner text.
See [XmlHierarchyConfigurator.cs L629](https://github.com/apache/logging-log4net/blob/2b8b17085995f64edb7c5892fb808c6e2af124ec/src/log4net/Repository/Hierarchy/XmlHierarchyConfigurator.cs#L629) | 1.0 | Support R/W From Element Inner Text - The log4net library expects property values to be located in the `value` attribute of an element. If not, it will also search the element's inner text. This editor currently does not read or write from/to an elemen's inner text.
See [XmlHierarchyConfigurator.cs L629](https://github.com/apache/logging-log4net/blob/2b8b17085995f64edb7c5892fb808c6e2af124ec/src/log4net/Repository/Hierarchy/XmlHierarchyConfigurator.cs#L629) | priority | support r w from element inner text the library expects property values to be located in the value attribute of an element if not it will also search the element s inner text this editor currently does not read or write from to an elemen s inner text see | 1 |
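The lookup order referenced above (prefer the `value` attribute, fall back to the element's inner text) is easy to mirror when reading a config; this Python sketch uses `xml.etree` and a hypothetical `read_property` helper, not the editor's or log4net's actual code:

```python
import xml.etree.ElementTree as ET
from typing import Optional

def read_property(elem: ET.Element) -> Optional[str]:
    """Mimic log4net's lookup order: prefer the 'value' attribute,
    then fall back to the element's inner text."""
    if "value" in elem.attrib:
        return elem.attrib["value"]
    if elem.text and elem.text.strip():
        return elem.text.strip()
    return None

attr_style = ET.fromstring('<file value="logs/app.log" />')
text_style = ET.fromstring('<file>logs/app.log</file>')
assert read_property(attr_style) == "logs/app.log"
assert read_property(text_style) == "logs/app.log"
```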
762,549 | 26,722,832,727 | IssuesEvent | 2023-01-29 10:55:23 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | closed | Cannot pull image from registries except docker hub. | Type: Bug Priority: Medium Status: Available | ### Describe the bug
I tried to enable auto upgrade on umami, an opensource web statistic tool, whose image name is `docker.umami.dev/umami-software/umami:mysql-latest`. It's obvious that the image is on `docker.umami.dev`, but watchtower tried to pull image from docker hub and failed.
### Steps to reproduce
1. start a container from `docker.umami.dev/umami-software/umami:mysql-latest`
2. enable auto upgrade on the new container
3. wait trigger auto update
4. See error
### Expected behavior
pull image from docker.umami.dev
### Screenshots
_No response_
### Environment
- Platform: Ubuntu 20.04
- Architecture: amd64
- Docker Version: 20.10.21
### Your logs
```text
time="2022-12-04T16:38:53+08:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2022-12-04T16:38:53+08:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2022-12-04T16:38:54+08:00" level=debug msg="Making sure everything is sane before starting"
time="2022-12-04T16:38:54+08:00" level=debug msg="Retrieving running containers"
time="2022-12-04T16:38:54+08:00" level=info msg="Watchtower 1.5.1"
time="2022-12-04T16:38:54+08:00" level=info msg="Using notifications: gotify"
time="2022-12-04T16:38:54+08:00" level=info msg="Only checking containers using enable label"
time="2022-12-04T16:38:54+08:00" level=info msg="Running a one time update."
time="2022-12-04T16:38:54+08:00" level=debug msg="Checking containers for updated images"
time="2022-12-04T16:38:54+08:00" level=debug msg="Retrieving running containers"
time="2022-12-04T16:38:54+08:00" level=debug msg="Trying to load authentication credentials." container=/phpMyAdmin image="docker.io/phpmyadmin:latest"
time="2022-12-04T16:38:54+08:00" level=debug msg="No credentials for docker.io found" config_file=/config.json
time="2022-12-04T16:38:54+08:00" level=debug msg="Got image name: docker.io/phpmyadmin:latest"
time="2022-12-04T16:38:54+08:00" level=debug msg="Checking if pull is needed" container=/phpMyAdmin image="docker.io/phpmyadmin:latest"
time="2022-12-04T16:38:54+08:00" level=debug msg="Building challenge URL" URL="https://index.docker.io/v2/"
time="2022-12-04T16:38:55+08:00" level=debug msg="Got response to challenge request" header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\"" status="401 Unauthorized"
time="2022-12-04T16:38:55+08:00" level=debug msg="Checking challenge header content" realm="https://auth.docker.io/token" service=registry.docker.io
time="2022-12-04T16:38:55+08:00" level=debug msg="Setting scope for auth token" image=docker.io/phpmyadmin scope="repository:library/phpmyadmin:pull"
time="2022-12-04T16:38:55+08:00" level=debug msg="No credentials found."
time="2022-12-04T16:38:56+08:00" level=debug msg="Parsing image ref" host=index.docker.io image=docker.io/phpmyadmin normalized="docker.io/library/phpmyadmin:latest" tag=latest
time="2022-12-04T16:38:56+08:00" level=debug msg="Doing a HEAD request to fetch a digest" url="https://index.docker.io/v2/library/phpmyadmin/manifests/latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="Found a remote digest to compare with" remote="sha256:3792514e6f6d38819dd8fbb659386bd8ef0019cc7e3fb44ea7c165ff0271ee4f"
time="2022-12-04T16:38:56+08:00" level=debug msg=Comparing local="sha256:3792514e6f6d38819dd8fbb659386bd8ef0019cc7e3fb44ea7c165ff0271ee4f" remote="sha256:3792514e6f6d38819dd8fbb659386bd8ef0019cc7e3fb44ea7c165ff0271ee4f"
time="2022-12-04T16:38:56+08:00" level=debug msg="Found a match"
time="2022-12-04T16:38:56+08:00" level=debug msg="No pull needed. Skipping image."
time="2022-12-04T16:38:56+08:00" level=debug msg="No new images found for /phpMyAdmin"
time="2022-12-04T16:38:56+08:00" level=debug msg="Trying to load authentication credentials." container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="No credentials for docker.io found" config_file=/config.json
time="2022-12-04T16:38:56+08:00" level=debug msg="Got image name: docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="Checking if pull is needed" container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="Building challenge URL" URL="https://index.docker.io/v2/"
time="2022-12-04T16:38:57+08:00" level=debug msg="Got response to challenge request" header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\"" status="401 Unauthorized"
time="2022-12-04T16:38:57+08:00" level=debug msg="Checking challenge header content" realm="https://auth.docker.io/token" service=registry.docker.io
time="2022-12-04T16:38:57+08:00" level=debug msg="Setting scope for auth token" image=docker.io/docker.umami.dev/umami-software/umami scope="repository:docker.umami.dev/umami-software/umami:pull"
time="2022-12-04T16:38:57+08:00" level=debug msg="No credentials found."
time="2022-12-04T16:38:57+08:00" level=debug msg="Parsing image ref" host=index.docker.io image=docker.io/docker.umami.dev/umami-software/umami normalized="docker.io/docker.umami.dev/umami-software/umami:mysql-latest" tag=mysql-latest
time="2022-12-04T16:38:57+08:00" level=debug msg="Doing a HEAD request to fetch a digest" url="https://index.docker.io/v2/docker.umami.dev/umami-software/umami/manifests/mysql-latest"
time="2022-12-04T16:38:58+08:00" level=warning msg="Could not do a head request for \"docker.io/docker.umami.dev/umami-software/umami:mysql-latest\", falling back to regular pull." container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:58+08:00" level=warning msg="Reason: registry responded to head request with \"401 Unauthorized\", auth: \"Bearer realm=\\\"https://auth.docker.io/token\\\",service=\\\"registry.docker.io\\\",scope=\\\"repository:docker.umami.dev/umami-software/umami:pull\\\",error=\\\"insufficient_scope\\\"\"" container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:58+08:00" level=debug msg="Pulling image" container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:39:02+08:00" level=debug msg="No new images found for /Umami"
time="2022-12-04T16:39:02+08:00" level=info msg="Session done" Failed=0 Scanned=2 Updated=0 notify=no
time="2022-12-04T16:39:02+08:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
```
### Additional context
_No response_ | 1.0 | Cannot pull image from registries except docker hub. - ### Describe the bug
I tried to enable auto upgrade on umami, an opensource web statistic tool, whose image name is `docker.umami.dev/umami-software/umami:mysql-latest`. It's obvious that the image is on `docker.umami.dev`, but watchtower tried to pull image from docker hub and failed.
### Steps to reproduce
1. start a container from `docker.umami.dev/umami-software/umami:mysql-latest`
2. enable auto upgrade on the new container
3. wait trigger auto update
4. See error
### Expected behavior
pull image from docker.umami.dev
### Screenshots
_No response_
### Environment
- Platform: Ubuntu 20.04
- Architecture: amd64
- Docker Version: 20.10.21
### Your logs
```text
time="2022-12-04T16:38:53+08:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2022-12-04T16:38:53+08:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2022-12-04T16:38:54+08:00" level=debug msg="Making sure everything is sane before starting"
time="2022-12-04T16:38:54+08:00" level=debug msg="Retrieving running containers"
time="2022-12-04T16:38:54+08:00" level=info msg="Watchtower 1.5.1"
time="2022-12-04T16:38:54+08:00" level=info msg="Using notifications: gotify"
time="2022-12-04T16:38:54+08:00" level=info msg="Only checking containers using enable label"
time="2022-12-04T16:38:54+08:00" level=info msg="Running a one time update."
time="2022-12-04T16:38:54+08:00" level=debug msg="Checking containers for updated images"
time="2022-12-04T16:38:54+08:00" level=debug msg="Retrieving running containers"
time="2022-12-04T16:38:54+08:00" level=debug msg="Trying to load authentication credentials." container=/phpMyAdmin image="docker.io/phpmyadmin:latest"
time="2022-12-04T16:38:54+08:00" level=debug msg="No credentials for docker.io found" config_file=/config.json
time="2022-12-04T16:38:54+08:00" level=debug msg="Got image name: docker.io/phpmyadmin:latest"
time="2022-12-04T16:38:54+08:00" level=debug msg="Checking if pull is needed" container=/phpMyAdmin image="docker.io/phpmyadmin:latest"
time="2022-12-04T16:38:54+08:00" level=debug msg="Building challenge URL" URL="https://index.docker.io/v2/"
time="2022-12-04T16:38:55+08:00" level=debug msg="Got response to challenge request" header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\"" status="401 Unauthorized"
time="2022-12-04T16:38:55+08:00" level=debug msg="Checking challenge header content" realm="https://auth.docker.io/token" service=registry.docker.io
time="2022-12-04T16:38:55+08:00" level=debug msg="Setting scope for auth token" image=docker.io/phpmyadmin scope="repository:library/phpmyadmin:pull"
time="2022-12-04T16:38:55+08:00" level=debug msg="No credentials found."
time="2022-12-04T16:38:56+08:00" level=debug msg="Parsing image ref" host=index.docker.io image=docker.io/phpmyadmin normalized="docker.io/library/phpmyadmin:latest" tag=latest
time="2022-12-04T16:38:56+08:00" level=debug msg="Doing a HEAD request to fetch a digest" url="https://index.docker.io/v2/library/phpmyadmin/manifests/latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="Found a remote digest to compare with" remote="sha256:3792514e6f6d38819dd8fbb659386bd8ef0019cc7e3fb44ea7c165ff0271ee4f"
time="2022-12-04T16:38:56+08:00" level=debug msg=Comparing local="sha256:3792514e6f6d38819dd8fbb659386bd8ef0019cc7e3fb44ea7c165ff0271ee4f" remote="sha256:3792514e6f6d38819dd8fbb659386bd8ef0019cc7e3fb44ea7c165ff0271ee4f"
time="2022-12-04T16:38:56+08:00" level=debug msg="Found a match"
time="2022-12-04T16:38:56+08:00" level=debug msg="No pull needed. Skipping image."
time="2022-12-04T16:38:56+08:00" level=debug msg="No new images found for /phpMyAdmin"
time="2022-12-04T16:38:56+08:00" level=debug msg="Trying to load authentication credentials." container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="No credentials for docker.io found" config_file=/config.json
time="2022-12-04T16:38:56+08:00" level=debug msg="Got image name: docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="Checking if pull is needed" container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:56+08:00" level=debug msg="Building challenge URL" URL="https://index.docker.io/v2/"
time="2022-12-04T16:38:57+08:00" level=debug msg="Got response to challenge request" header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\"" status="401 Unauthorized"
time="2022-12-04T16:38:57+08:00" level=debug msg="Checking challenge header content" realm="https://auth.docker.io/token" service=registry.docker.io
time="2022-12-04T16:38:57+08:00" level=debug msg="Setting scope for auth token" image=docker.io/docker.umami.dev/umami-software/umami scope="repository:docker.umami.dev/umami-software/umami:pull"
time="2022-12-04T16:38:57+08:00" level=debug msg="No credentials found."
time="2022-12-04T16:38:57+08:00" level=debug msg="Parsing image ref" host=index.docker.io image=docker.io/docker.umami.dev/umami-software/umami normalized="docker.io/docker.umami.dev/umami-software/umami:mysql-latest" tag=mysql-latest
time="2022-12-04T16:38:57+08:00" level=debug msg="Doing a HEAD request to fetch a digest" url="https://index.docker.io/v2/docker.umami.dev/umami-software/umami/manifests/mysql-latest"
time="2022-12-04T16:38:58+08:00" level=warning msg="Could not do a head request for \"docker.io/docker.umami.dev/umami-software/umami:mysql-latest\", falling back to regular pull." container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:58+08:00" level=warning msg="Reason: registry responded to head request with \"401 Unauthorized\", auth: \"Bearer realm=\\\"https://auth.docker.io/token\\\",service=\\\"registry.docker.io\\\",scope=\\\"repository:docker.umami.dev/umami-software/umami:pull\\\",error=\\\"insufficient_scope\\\"\"" container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:38:58+08:00" level=debug msg="Pulling image" container=/Umami image="docker.io/docker.umami.dev/umami-software/umami:mysql-latest"
time="2022-12-04T16:39:02+08:00" level=debug msg="No new images found for /Umami"
time="2022-12-04T16:39:02+08:00" level=info msg="Session done" Failed=0 Scanned=2 Updated=0 notify=no
time="2022-12-04T16:39:02+08:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
```
### Additional context
_No response_ | priority | cannot pull image from registries except docker hub describe the bug i tried to enable auto upgrade on umami an opensource web statistic tool whose image name is docker umami dev umami software umami mysql latest it s obvious that the image is on docker umami dev but watchtower tried to pull image from docker hub and failed steps to reproduce start a container from docker umami dev umami software umami mysql latest enable auto upgrade on the new container wait trigger auto update see error expected behavior pull image from docker umami dev screenshots no response environment platform ubuntu architecture docker version your logs text time level warning msg using an http url for gotify is insecure time level debug msg sleeping for a second to ensure the docker api client has been properly initialized time level debug msg making sure everything is sane before starting time level debug msg retrieving running containers time level info msg watchtower time level info msg using notifications gotify time level info msg only checking containers using enable label time level info msg running a one time update time level debug msg checking containers for updated images time level debug msg retrieving running containers time level debug msg trying to load authentication credentials container phpmyadmin image docker io phpmyadmin latest time level debug msg no credentials for docker io found config file config json time level debug msg got image name docker io phpmyadmin latest time level debug msg checking if pull is needed container phpmyadmin image docker io phpmyadmin latest time level debug msg building challenge url url time level debug msg got response to challenge request header bearer realm status unauthorized time level debug msg checking challenge header content realm service registry docker io time level debug msg setting scope for auth token image docker io phpmyadmin scope repository library phpmyadmin pull time level debug msg no 
credentials found time level debug msg parsing image ref host index docker io image docker io phpmyadmin normalized docker io library phpmyadmin latest tag latest time level debug msg doing a head request to fetch a digest url time level debug msg found a remote digest to compare with remote time level debug msg comparing local remote time level debug msg found a match time level debug msg no pull needed skipping image time level debug msg no new images found for phpmyadmin time level debug msg trying to load authentication credentials container umami image docker io docker umami dev umami software umami mysql latest time level debug msg no credentials for docker io found config file config json time level debug msg got image name docker io docker umami dev umami software umami mysql latest time level debug msg checking if pull is needed container umami image docker io docker umami dev umami software umami mysql latest time level debug msg building challenge url url time level debug msg got response to challenge request header bearer realm status unauthorized time level debug msg checking challenge header content realm service registry docker io time level debug msg setting scope for auth token image docker io docker umami dev umami software umami scope repository docker umami dev umami software umami pull time level debug msg no credentials found time level debug msg parsing image ref host index docker io image docker io docker umami dev umami software umami normalized docker io docker umami dev umami software umami mysql latest tag mysql latest time level debug msg doing a head request to fetch a digest url time level warning msg could not do a head request for docker io docker umami dev umami software umami mysql latest falling back to regular pull container umami image docker io docker umami dev umami software umami mysql latest time level warning msg reason registry responded to head request with unauthorized auth bearer realm container umami image docker io 
docker umami dev umami software umami mysql latest time level debug msg pulling image container umami image docker io docker umami dev umami software umami mysql latest time level debug msg no new images found for umami time level info msg session done failed scanned updated notify no time level info msg waiting for the notification goroutine to finish notify no additional context no response | 1 |
805,276 | 29,515,491,572 | IssuesEvent | 2023-06-04 12:47:48 | polyphony-chat/chorus | https://api.github.com/repos/polyphony-chat/chorus | closed | Reactions | Priority: Medium Status: In Progress Type: Enhancement Difficulty: Easy | Add all reactions routes.
- [x] DELETE /channels/{channel_id}/messages/{message_id}/reactions/
- [x] GET /channels/{channel_id}/messages/{message_id}/reactions/{emoji}
- [x] DELETE /channels/{channel_id}/messages/{message_id}/reactions/{emoji}
- [x] DELETE /channels/{channel_id}/messages/{message_id}/reactions/{emoji}/{user_id}
- [x] PUT /channels/{channel.id}/messages/{message.id}/reactions/{emoji}/@me
- [x] DELETE /channels/{channel.id}/messages/{message.id}/reactions/{emoji}/@me | 1.0 | Reactions - Add all reactions routes.
- [x] DELETE /channels/{channel_id}/messages/{message_id}/reactions/
- [x] GET /channels/{channel_id}/messages/{message_id}/reactions/{emoji}
- [x] DELETE /channels/{channel_id}/messages/{message_id}/reactions/{emoji}
- [x] DELETE /channels/{channel_id}/messages/{message_id}/reactions/{emoji}/{user_id}
- [x] PUT /channels/{channel.id}/messages/{message.id}/reactions/{emoji}/@me
- [x] DELETE/channels/{channel.id}/messages/{message.id}/reactions/{emoji}/@me | priority | reactions add all reactions routes delete channels channel id messages message id reactions get channels channel id messages message id reactions emoji delete channels channel id messages message id reactions emoji delete channels channel id messages message id reactions emoji user id put channels channel id messages message id reactions emoji me delete channels channel id messages message id reactions emoji me | 1 |
254,718 | 8,087,272,642 | IssuesEvent | 2018-08-09 00:42:15 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Tool Tip arrows don't point at what they are tool tipping | Medium Priority | 
Only for the "side" tooltips | 1.0 | Tool Tip arrows don't point at what they are tool tipping - 
Only for the "side" tooltips | priority | tool tip arrows don t point at what they are tool tipping only for the side tooltips | 1 |
44,073 | 2,899,106,765 | IssuesEvent | 2015-06-17 09:15:06 | greenlion/PHP-SQL-Parser | https://api.github.com/repos/greenlion/PHP-SQL-Parser | closed | [REQUEST] Start and stop positions of the statement components within the SQL string | bug imported Priority-Medium | _From [pho...@gmx.de](https://code.google.com/u/109317404671582518013/) on January 20, 2012 09:23:54_
Is it possible to return the positions of the statement components within the original SQL string? I have to replace some column names, but I don't know where the column name is used within the statement. A simple string replace doesn't work in all cases.
Thanks
Andre
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=19_ | 1.0 | [REQUEST] Start and stop positions of the statement components within the SQL string - _From [pho...@gmx.de](https://code.google.com/u/109317404671582518013/) on January 20, 2012 09:23:54_
Is it possible to return the positions of the statement components within the original SQL string? I have to replace some column names, but I don't know where the column name is used within the statement. A simple string replace doesn't work in all cases.
Thanks
Andre
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=19_ | priority | start and stop positions of the statement components within the sql string from on january is it possible to return the positions of the statement components within the original sql string i have to replace some column names but i don t know where the column name is used within the statement a simple string replace doesn t work in all cases thanks andre original issue | 1 |
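The pitfall described in the request above — a plain string replace clobbering other identifiers — is easy to demonstrate. The query below is invented for illustration; it is not from the parser's test suite:

```shell
#!/bin/sh
# Renaming column "name" with a blind substitution also rewrites "username",
# which is why start/stop byte positions from the parser are needed to touch
# only the genuine column references. (Query is a made-up example.)
SQL="SELECT name, username FROM users WHERE name = 'x'"
echo "$SQL" | sed 's/name/full_name/g'
# prints: SELECT full_name, userfull_name FROM users WHERE full_name = 'x'
```

A parser that reports each component's offset range would let the caller splice replacements at exact byte positions instead of pattern-matching the whole string.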
541,979 | 15,836,883,435 | IssuesEvent | 2021-04-06 19:58:38 | dietterc/SEO-ker | https://api.github.com/repos/dietterc/SEO-ker | opened | Stop players from submitting a bet of zero. | developer task feature 3 medium priority | There is no incentive to bet, we need to stop players from submitting 0 in their bet.
Maybe an ante method, or just ensure a non-negative number. I think we should force players to match the highest bet, but needs discussion. | 1.0 | Stop players from submitting a bet of zero. - There is no incentive to bet, we need to stop players from submitting 0 in their bet.
Maybe a ante method, or just ensure a non negative number. I think we should force players to match the highest bet, but needs discussion. | priority | stop players from submitting a bet of zero there is no incentive to bet we need to stop players from submitting in their bet maybe a ante method or just ensure a non negative number i think we should force players to match the highest bet but needs discussion | 1 |
94,643 | 3,930,640,747 | IssuesEvent | 2016-04-25 08:59:25 | mozilla/bidpom | https://api.github.com/repos/mozilla/bidpom | closed | figure out how to identify failure due to wrong password | priority medium | i had some tests fail because i used the wrong password, the screenshot was as if sign in had never been pressed, because we automatically close the popup window rather than waiting for it to close on it's own.
the notification in the popup that explains the issue is only visible for max 2 seconds, making it hard to catch with selenium
@davehunt, i have an untested stash with changes that looks for the popup window by name rather than title, as well as waiting for the popup window to close rather than closing it forcefully. | 1.0 | figure out how to identify failure due to wrong password - i had some tests fail because i used the wrong password, the screenshot was as if sign in had never been pressed, because we automatically close the popup window rather than waiting for it to close on it's own.
the notification in the popup that explains the issue is only visible for max 2 seconds, making it hard to catch with selenium
@davehunt, i have an untested stash with changes that looks for the popup window by name rather than title, as well as waiting for the popup window to close rather than closing it forcefully. | priority | figure out how to identify failure due to wrong password i had some tests fail because i used the wrong password the screenshot was as if sign in had never been pressed because we automatically close the popup window rather than waiting for it to close on it s own the notification in the popup that explains the issue is only visible for max seconds making it hard to catch with selenium davehunt i have an untested stash with changes that looks for the popup window by name rather than title as well as waiting for the popup window to close rather than closing it forcefully | 1 |
802,587 | 28,967,706,745 | IssuesEvent | 2023-05-10 09:02:39 | OpenBioML/chemnlp | https://api.github.com/repos/OpenBioML/chemnlp | closed | Finetune 2.8B model on chemrxiv | work package: model training priority: medium | * blocked by outcomes of #170
This will include
1) finding the optimal pipeline configuration for this model size
2) training this model on the cluster using this configuration for full model finetuning
3) saving these checkpoints so that the evaluation pipeline can process them
We want to get this model to the stage where others can iterate on training aspects such as learning rate scheduler, optimiser, number of epochs, etc with training efficiency staying high. | 1.0 | Finetune 2.8B model on chemrxiv - * blocked by outcomes of #170
This will include
1) finding the optimal pipeline configuration for this model size
2) training this model on the cluster using this configuration for full model finetuning
3) saving these checkpoints so that the evaluation pipeline can process them
We want to get this model to the stage where others can iterate on training aspects such as learning rate scheduler, optimiser, number of epochs, etc with training efficiency staying high. | priority | finetune model on chemrxiv blocked by outcomes of this will include finding the optimal pipeline configuration for this model size training this model on the cluster using this configuration for full model finetuning saving these checkpoints so that the evaluation pipeline can process them we want to get this model to the stage where others can iterate on training aspects such as learning rate scheduler optimiser number of epochs etc with training efficiency staying high | 1 |
184,135 | 6,705,983,781 | IssuesEvent | 2017-10-12 04:00:03 | MikeSmvl/classifieds | https://api.github.com/repos/MikeSmvl/classifieds | opened | Password Strength meter | Points: 3 Priority: Medium Type: Feature | Description:
User is not aware of password Strength.
Done When:
User is able to see the strength of his password.
| 1.0 | Password Strength meter - Description:
User is not aware of password Strength.
Done When:
User is able to see the strength of his password.
| priority | password strength meter description user is not aware of password strength done when user is able to see the strength of his password | 1 |
578,663 | 17,149,708,587 | IssuesEvent | 2021-07-13 18:48:16 | CyanLabs/Syn3Updater | https://api.github.com/repos/CyanLabs/Syn3Updater | closed | Add possibility to de-/activate edit advanced settings per profile | Priority: Medium Type: Bug | As title says :D
Because right now, when you activate the Edit Advanced Settings in one profile, it's also active in the other profile and vice versa...
For some profiles I want to have it activated, for some deactivated | 1.0 | Add possibility to de-/activate edit advanced settings per profile - As title says :D
Because right now, when you activate the Edit Advanced Settings in one profile, it's also active in the other profile and vice versa...
For some profiles I want to have it activated, for some deactivated | priority | add possibility to de activate edit advanced settings per profile as title says d cause now when u activate the edit advanced settings in one profile it s also active in the other profile and visa versa for some profiles i want to have it activated for some deactivated | 1 |
76,128 | 3,481,800,847 | IssuesEvent | 2015-12-29 18:34:24 | phetsims/tasks | https://api.github.com/repos/phetsims/tasks | opened | Test Java version of Isotopes and Atomic mass | Medium Priority QA | @bryo5363 can you test the Java version of Isotopes and Atomic Mass. We are porting this sim, and it would be good to identify any existing bugs. | 1.0 | Test Java version of Isotopes and Atomic mass - @bryo5363 can you test the Java version of Isotopes and Atomic Mass. We are porting this sim, and it would be good to identify any existing bugs. | priority | test java version of isotopes and atomic mass can you test the java version of isotopes and atomic mass we are porting this sim and it would be good to identify any existing bugs | 1 |
552,382 | 16,239,698,289 | IssuesEvent | 2021-05-07 07:57:25 | musescore/MuseScore | https://api.github.com/repos/musescore/MuseScore | opened | [MU4 Issue] Can`t select duplets and others from Note Input Bar | Medium Priority | **Describe the bug**
Can't select tuplets and others from Note Input Bar
**To Reproduce**
Steps to reproduce the behavior:
1. Create a score
2. Add a note
3. Select note
4. Click on duplet icon from Note Input bar
5. Try to add duplet, triplet, etc.
**Expected behavior**
Duplet, triplet, etc. options blocked
**Screenshots**

**Desktop (please complete the following information):**
Linux
Windows 10
**Additional context**
Add any other context about the problem here.
| 1.0 | [MU4 Issue] Can`t select duplets and others from Note Input Bar - **Describe the bug**
Can't select tuplets and others from Note Input Bar
**To Reproduce**
Steps to reproduce the behavior:
1. Create a score
2. Add a note
3. Select note
4. Click on duplet icon from Note Input bar
5. Try to add duplet, triplet, etc.
**Expected behavior**
Duplet, triplet, etc. options blocked
**Screenshots**

**Desktop (please complete the following information):**
Linux
Windows 10
**Additional context**
Add any other context about the problem here.
| priority | can t select duplets and others from note input bar describe the bug can t select tuplets and others from note input bar to reproduce steps to reproduce the behavior create a score add a note select note click on duplet icon from note input bar try to add duplet triplet etc expected behavior duplet triplet etc options blocked screenshots desktop please complete the following information linux windows additional context add any other context about the problem here | 1 |
235,875 | 7,743,663,758 | IssuesEvent | 2018-05-29 13:28:46 | ELVIS-Project/elvis-database | https://api.github.com/repos/ELVIS-Project/elvis-database | closed | Empty download files in the local deployment | Priority: MEDIUM | When downloading a media file from the local deployment of the database, the files are all empty.
| 1.0 | Empty download files in the local deployment - When downloading a media file from the local deployment of the database, the files are all empty.
| priority | empty download files in the local deployment when downloading a media file from the local deployment of the database the files are all empty | 1 |
500,632 | 14,503,227,288 | IssuesEvent | 2020-12-11 22:19:11 | CICE-Consortium/CICE | https://api.github.com/repos/CICE-Consortium/CICE | closed | timeseries plotting errors | Priority: Medium Scripts Tools Type: Bug | The timeseries plotter appears to be picking up values that it should not.
These issues were noted in #533, for nonstandard plots (so the problem might be my editing of the timeseries script, rather than a problem originally in the script). They occur with all types of forcing (JRA, CORE, NCARbulk). I'm attaching two examples, one for sst and one for the area-weighted total energy plots.
[examples.zip](https://github.com/CICE-Consortium/CICE/files/5653726/examples.zip)
In the sst example,
```
grep sst cice.runlog* > sst_gx1_jra_all
grep "sst (C)" cice.runlog* > sst_gx1_jra_C
diff sst_gx1_jra_all sst_gx1_jra_C > diff_sst
less diff_sst
< min/max sst, frzmlt
< min, max, sum = -1.96869665044293 32.9603346231369 1525878.86583381 sst
33d30
< min, max, sum = -1.90458264992426 32.7596970527221 1530936.61448477 sst
64d60
< min, max, sum = -1.90458264992426 31.7854126938785 1534787.65564628 sst
95d90
< min, max, sum = -1.90458264992426 31.9466864214512 1526806.43659672 sst
```
The last values (~1.5e6) in the latter four lines in the diff_sst file appear to correspond with the four spikes in the plot. The timeseries plot was created with timeseries_all.csh, which greps on the "sst (C)" form, so I don't think it should be picking up these values. But maybe I don't understand how the script works. Could this be a line-continuation problem?
A similar thing seems to be happening in the arwt_tot_energy plot (but not for the arwt_tot_energy_chng plot).
I didn't try the python script.
Also, it might be nice to add options for plotting more variables to the script (or add scripts). Just a thought.
| 1.0 | timeseries plotting errors - The timeseries plotter appears to be picking up values that it should not.
These issues were noted in #533, for nonstandard plots (so the problem might be my editing of the timeseries script, rather than a problem originally in the script). They occur with all types of forcing (JRA, CORE, NCARbulk). I'm attaching two examples, one for sst and one for the area-weighted total energy plots.
[examples.zip](https://github.com/CICE-Consortium/CICE/files/5653726/examples.zip)
In the sst example,
```
grep sst cice.runlog* > sst_gx1_jra_all
grep "sst (C)" cice.runlog* > sst_gx1_jra_C
diff sst_gx1_jra_all sst_gx1_jra_C > diff_sst
less diff_sst
< min/max sst, frzmlt
< min, max, sum = -1.96869665044293 32.9603346231369 1525878.86583381 sst
33d30
< min, max, sum = -1.90458264992426 32.7596970527221 1530936.61448477 sst
64d60
< min, max, sum = -1.90458264992426 31.7854126938785 1534787.65564628 sst
95d90
< min, max, sum = -1.90458264992426 31.9466864214512 1526806.43659672 sst
```
The last values (~1.5e6) in the latter four lines in the diff_sst file appear to correspond with the four spikes in the plot. The timeseries plot was created with timeseries_all.csh, which greps on the "sst (C)" form, so I don't think it should be picking up these values. But maybe I don't understand how the script works. Could this be a line-continuation problem?
A similar thing seems to be happening in the arwt_tot_energy plot (but not for the arwt_tot_energy_chng plot).
I didn't try the python script.
Also, it might be nice to add options for plotting more variables to the script (or add scripts). Just a thought.
| priority | timeseries plotting errors the timeseries plotter appears to be picking up values that it should not these issues were noted in for nonstandard plots so the problem might be my editing of the timeseries script rather than a problem originally in the script they occur with all types of forcing jra core ncarbulk i m attaching two examples one for sst and one for the area weighted total energy plots in the sst example grep sst cice runlog sst jra all grep sst c cice runlog sst jra c diff sst jra all sst jra c diff sst less diff sst min max sst frzmlt min max sum sst min max sum sst min max sum sst min max sum sst the last values in the latter four lines in the diff sst file appear to correspond with the four spikes in the plot the timeseries plot was created with timeseries all csh which greps on the ssh c form so i don t think it should be picking up these values but maybe i don t understand how the script works could this be a line continuation problem a similar thing seems to be happening in the arwt tot energy plot but not for the arwt tot energy chng plot i didn t try the python script also it might be nice to add options for plotting more variables to the script or add scripts just a thought | 1 |
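One quick way to check whether the plotting script's grep pattern is the culprit in the timeseries report above is to compare a literal-string match against the broader one. The sample log lines below are fabricated stand-ins for real cice.runlog output, not actual CICE diagnostics:

```shell
#!/bin/sh
# A fixed-string grep on 'sst (C)' (-F disables regex interpretation of the
# parentheses) should exclude both the "min/max sst, frzmlt" header and the
# "min, max, sum = ... sst" diagnostics that show up as spikes.
cat > cice.runlog.sample <<'EOF'
sst (C)  =  -1.71234
min/max sst, frzmlt
min, max, sum = -1.96869665044293 32.9603346231369 1525878.86583381 sst
EOF
grep -cF 'sst (C)' cice.runlog.sample   # prints 1: only the true timeseries line matches
```

If the script's effective pattern were looser than this (for example an unanchored `sst`), that alone could explain the four ~1.5e6 outliers picked up from the diagnostic lines.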
634,430 | 20,361,041,434 | IssuesEvent | 2022-02-20 17:38:08 | sarweshmaharjan/track-life | https://api.github.com/repos/sarweshmaharjan/track-life | opened | As a user, I want to view detail about place where I spent (expenses) and how much i get/have (income) | Priority: Medium User Story | **Tasks**
> Frontend
- [ ] Tab UI.
- Expenses
- Income
- [ ] Show list of categories that lie within expenses/income.
- [ ] If present, show the information with the following things.
- Category name
- How much overbudget/underbudget it went.
- Total amount spent.
- Total number of transaction.
- [ ] If not present, show an empty state with message, "Oops there is nothing to display".
- [ ] Add transaction button.
- Dropdown list where user can select either "Expenses" or "Income".
- [ ] Sort according to different criteria.
- [ ] Show income left and allocated amount for each month.
- [ ] Add search bar
> Backend
- [ ] Calculate percentage of category according to total amount allocated.
- [ ] Calculate the number of transaction present within that category.
- [ ] Once click, show the list of items that are within that category.
- [ ] Sort according to latest date of item purchases date.
- [ ] Send category list.
- [ ] Show left amount and allocated amount for selected month. By default, show the current month info.
- [ ] Search categories / items.
---
**Tab Contains**
- Expenses tab.
- Categories name, percentage of underbudget, total amount spent shown in list form, total transaction.
- Income tab.
- Categories name, percentage of overbudget, total amount spent shown in list form, total transaction.
- Add button
- Add Income
- Add Expenses
---
**Priority** : Medium
---
**Estimate** : 3 days.
---
**Acceptance criteria**
- [ ] Show list of category if there is data.
- [ ] Show empty template if there is no data.
- [ ] Drop down should show and let user decide which transaction type to add.
- [ ] Clicking on the category should show the transaction list.
- [ ] Click left or right button on top will show the user data of previous month or next month.
- [ ] Badge to indicate what the current status of the expenses is.
- [ ] Clicking on month will show the list of month.
- [ ] Clicking on the year will show year list.
- [ ] Searching will show list of category or items that match with it.
--- | 1.0 | As a user, I want to view detail about place where I spent (expenses) and how much i get/have (income) - **Tasks**
> Frontend
- [ ] Tab UI.
- Expenses
- Income
- [ ] Show list of categories that lie within expenses/income.
- [ ] If present, show the information with the following things.
- Category name
- How much overbudget/underbudget it went.
- Total amount spent.
- Total number of transaction.
- [ ] If not present, show an empty state with message, "Oops there is nothing to display".
- [ ] Add transaction button.
- Dropdown list where user can select either "Expenses" or "Income".
- [ ] Sort according to different criteria.
- [ ] Show income left and allocated amount for each month.
- [ ] Add search bar
> Backend
- [ ] Calculate percentage of category according to total amount allocated.
- [ ] Calculate the number of transaction present within that category.
- [ ] Once click, show the list of items that are within that category.
- [ ] Sort according to latest date of item purchases date.
- [ ] Send category list.
- [ ] Show left amount and allocated amount for selected month. By default, show the current month info.
- [ ] Search categories / items.
---
**Tab Contains**
- Expenses tab.
- Categories name, percentage of underbudget, total amount spent shown in list form, total transaction.
- Income tab.
- Categories name, percentage of overbudget, total amount spent shown in list form, total transaction.
- Add button
- Add Income
- Add Expenses
---
**Priority** : Medium
---
**Estimate** : 3 days.
---
**Acceptance criteria**
- [ ] Show list of category if there is data.
- [ ] Show empty template if there is no data.
- [ ] Drop down should show and let user decide which transaction type to add.
- [ ] Clicking on the category should show the transaction list.
- [ ] Click left or right button on top will show the user data of previous month or next month.
- [ ] Badge to indicate what the current status of the expenses is.
- [ ] Clicking on month will show the list of month.
- [ ] Clicking on the year will show year list.
- [ ] Searching will show list of category or items that match with it.
--- | priority | as a user i want to view detail about place where i spent expenses and how much i get have income tasks frontend tab ui expenses income show list of categories that lies within expenses income if present show the information with the following things category name how much overbudget underbudget it went total amount spent total number of transaction if not present show an empty state with message oops there is nothing to display add transaction button dropdown list where user can select either expenses or income sort according to different criteria show income left and allocated amount for each month add search bar backend calculate percentage of category according to total amount allocated calculate the number of transaction present within that category once click show the list of items that are within that category sort according to latest date of item purchases date send category list show left amount and allocated amount for selected month by default show the current month info search categories items tab contains expenses tab categories name percentage of underbudget total amount spent shown in list form total transaction income tab categories name percentage of overbudget total amount spent shown in list form total transaction add button add income add expenses priority medium estimate days acceptance criteria show list of category if there is data show empty template if there is no data drop down should show and let user decide which transaction type to add clicking on the category should show the transaction list click left or right button on top will show the user data of previous month or next month badge to indicate what the current status of the expenses is clicking on month will show the list of month clicking on the year will show year list searching will show list of category or items that match with it | 1 |
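The "calculate percentage of category according to total amount allocated" task in the backend list above reduces to spent/allocated. A minimal sketch with made-up numbers (the story does not define the actual data model or field names):

```shell
#!/bin/sh
# Percentage of a category's allocation that has been spent; awk handles the
# floating-point division. The values are illustrative only.
allocated=500
spent=125
pct=$(awk -v s="$spent" -v a="$allocated" 'BEGIN { printf "%.1f", 100 * s / a }')
echo "category used ${pct}% of its allocation"
# prints: category used 25.0% of its allocation
```

The overbudget/underbudget badge from the acceptance criteria is then just a comparison of this percentage against 100.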
262,666 | 8,272,317,318 | IssuesEvent | 2018-09-16 18:52:25 | hack4impact-uiuc/h4i-recruitment | https://api.github.com/repos/hack4impact-uiuc/h4i-recruitment | closed | Have a Page that shows all the interviews per Candidate | Priority: Medium | This page would show all the interview notes and make it easily accessible. I imagine it to be organized per Candidate, so a Candidate as a heading and then clickable interview summaries below. Maybe we should have a summary of the interviews (average score, general notes) per candidate or we can just link to the interviews.
We currently show interviews in a modal, maybe port those over to a page as well? | 1.0 | Have a Page that shows all the interviews per Candidate - This page would show all the interview notes and make it easily accessible. I imagine it to be organized per Candidate, so a Candidate as a heading and then clickable interview summaries below. Maybe we should have a summary of the interviews (average score, general notes) per candidate or we can just link to the interviews.
We currently show interviews in a modal, maybe port those over to a page as well? | priority | have a page that shows all the interviews per candidate this page would show all the interview notes and make it easily accessible i imagine it to be organized per candidate so a candidate as a heading and then clickable interview summaries below maybe we should have a summary of the interviews average score general notes per candidate or we can just link to the interviews we currently show interviews in a modal maybe port those over to a page as well | 1 |
271,609 | 8,485,737,708 | IssuesEvent | 2018-10-26 08:46:52 | cms-gem-daq-project/cmsgemos | https://api.github.com/repos/cms-gem-daq-project/cmsgemos | closed | Bug Report: linkuhaltables.sh does not create correct links. | Priority: Medium Type: Bug | <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
Cannot make uhal links with `linkuhaltables.sh` due to syntax differences between address table filenames in the release and expectation in `linkuhaltables.sh`
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [X] Bug report (report an issue with the code)
- [ ] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
`linkuhaltables.sh` should conform to the syntax of files in a FW release or...
`GEM_AMC` should have an automated release procedure that maintains a filename convention. However, the last two releases in `GEM_AMC` use the same syntax for the address table filenames, e.g.
- [v1.11.6](https://github.com/evka85/GEM_AMC/releases/tag/v1.11.6)
- [v1.9.4](https://github.com/evka85/GEM_AMC/releases/tag/v1.9.4)
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
```
$ ./linkuhaltables.sh $FIRMWARE_GEM/CTP7/v1.11.6/address_table_v1_11_6 ""
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__AMC.xml uhal_gem_amc_ctp7_amc.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH00.xml uhal_gem_amc_ctp7_link00.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH01.xml uhal_gem_amc_ctp7_link01.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH02.xml uhal_gem_amc_ctp7_link02.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH03.xml uhal_gem_amc_ctp7_link03.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH04.xml uhal_gem_amc_ctp7_link04.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH05.xml uhal_gem_amc_ctp7_link05.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH06.xml uhal_gem_amc_ctp7_link06.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH07.xml uhal_gem_amc_ctp7_link07.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH08.xml uhal_gem_amc_ctp7_link08.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH09.xml uhal_gem_amc_ctp7_link09.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH10.xml uhal_gem_amc_ctp7_link10.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH11.xml uhal_gem_amc_ctp7_link11.xml
```
This generates a series of broken links. Seems like the address table xml's inside the CTP7 releases
- [v1.11.6](https://github.com/evka85/GEM_AMC/releases/tag/v1.11.6)
- [v1.9.4](https://github.com/evka85/GEM_AMC/releases/tag/v1.9.4)
Use the following format:
```
$ ll address_table_v1_11_6
total 20M
-rw-rw-r--. 1 gemuser gemuser 825K Aug 29 2017 uhal_gem_amc_glib.xml
-rw-rw-r--. 1 gemuser gemuser 5.2M Aug 29 2017 uhal_gem_amc_ctp7.xml
-rw-rw-r--. 1 gemuser gemuser 172K Nov 14 21:17 gem_amc_top.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link03.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link02.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link01.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link00.xml
-rw-rw-r--. 1 gemuser gemuser 164K Nov 14 21:24 uhal_gem_amc_ctp7_amc.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link11.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link10.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link09.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link08.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link07.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link06.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link05.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link04.xml
```
But the format that `linkuhaltables.sh` is coded for uses:
```
uhal_gem_amc_ctp7_${subname}_OH${linknum}.xml
```
Specifically in the release `${subname}` is not present and `OH${linknum}` is `link${linknum}`.
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. `wget https://github.com/evka85/GEM_AMC/releases/download/v1.11.6/address_table_v1_11_6.zip`
2. `unzip address_table_v1_11_6.zip`
3. `./linkuhaltables.sh address_table_v1_11_6 ""`
## Possible Solution (for bugs)
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Change `linkuhaltables.sh` to match the address-table filename syntax used in `GEM_AMC` releases.
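A minimal sketch of what the corrected linking loop could look like, assuming the release-style filenames shown above and twelve optohybrid links (the argument handling and defaults here are illustrative assumptions, not the script's actual interface):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: link release-style address tables without the
# ${subname} infix, using the "link${linknum}" suffix the releases ship with.
# tabledir and nlinks defaults are assumptions for illustration only.
tabledir=${1:-address_table_v1_11_6}
nlinks=${2:-12}

# AMC table keeps its name unchanged
ln -fs "${tabledir}/uhal_gem_amc_ctp7_amc.xml" uhal_gem_amc_ctp7_amc.xml

# Release files use "link${linknum}", not "OH${linknum}"
for linknum in $(seq -f "%02g" 0 $((nlinks - 1))); do
    ln -fs "${tabledir}/uhal_gem_amc_ctp7_link${linknum}.xml" \
           "uhal_gem_amc_ctp7_link${linknum}.xml"
done
```

Since `ln -s` does not require the target to exist, this would also work when staging links before unpacking a release archive, though the links stay dangling until the tables are in place.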
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Cannot link uhal tables automatically on a new setup or new FW release version.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: da92e0d5493e3713307901c5afae2f83f32a1787
* Shell used: `/bin/bash`
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| 1.0 | Bug Report: linkuhaltables.sh does not create correct links. - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
Cannot make uhal links with `linkuhaltables.sh` due to syntax differences between address table filenames in the release and expectation in `linkuhaltables.sh`
### Types of issue
<!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [X] Bug report (report an issue with the code)
- [ ] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
`linkuhaltables.sh` should conform to the syntax of files in a FW release or...
`GEM_AMC` should have an automated release procedure that maintains a filename convention. However the last two releases in `GEM_AMC` use the same syntax for the address table filenames, e.g.
- [v1.11.6](https://github.com/evka85/GEM_AMC/releases/tag/v1.11.6)
- [v1.9.4](https://github.com/evka85/GEM_AMC/releases/tag/v1.9.4)
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
```
$ ./linkuhaltables.sh $FIRMWARE_GEM/CTP7/v1.11.6/address_table_v1_11_6 ""
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__AMC.xml uhal_gem_amc_ctp7_amc.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH00.xml uhal_gem_amc_ctp7_link00.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH01.xml uhal_gem_amc_ctp7_link01.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH02.xml uhal_gem_amc_ctp7_link02.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH03.xml uhal_gem_amc_ctp7_link03.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH04.xml uhal_gem_amc_ctp7_link04.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH05.xml uhal_gem_amc_ctp7_link05.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH06.xml uhal_gem_amc_ctp7_link06.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH07.xml uhal_gem_amc_ctp7_link07.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH08.xml uhal_gem_amc_ctp7_link08.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH09.xml uhal_gem_amc_ctp7_link09.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH10.xml uhal_gem_amc_ctp7_link10.xml
ln -fs /data/bigdisk/GEMDAQ_Documentation/system/firmware/files/CTP7/v1.11.6/address_table_v1_11_6/uhal_gem_amc_ctp7__OH11.xml uhal_gem_amc_ctp7_link11.xml
```
This generates a series of broken links. Seems like the address table xml's inside the CTP7 releases
- [v1.11.6](https://github.com/evka85/GEM_AMC/releases/tag/v1.11.6)
- [v1.9.4](https://github.com/evka85/GEM_AMC/releases/tag/v1.9.4)
Use the following format:
```
$ ll address_table_v1_11_6
total 20M
-rw-rw-r--. 1 gemuser gemuser 825K Aug 29 2017 uhal_gem_amc_glib.xml
-rw-rw-r--. 1 gemuser gemuser 5.2M Aug 29 2017 uhal_gem_amc_ctp7.xml
-rw-rw-r--. 1 gemuser gemuser 172K Nov 14 21:17 gem_amc_top.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link03.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link02.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link01.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link00.xml
-rw-rw-r--. 1 gemuser gemuser 164K Nov 14 21:24 uhal_gem_amc_ctp7_amc.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link11.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link10.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link09.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link08.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link07.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link06.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link05.xml
-rw-rw-r--. 1 gemuser gemuser 1.1M Nov 14 21:24 uhal_gem_amc_ctp7_link04.xml
```
But the format that `linkuhaltables.sh` is coded for uses:
```
uhal_gem_amc_ctp7_${subname}_OH${linknum}.xml
```
Specifically in the release `${subname}` is not present and `OH${linknum}` is `link${linknum}`.
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. `wget https://github.com/evka85/GEM_AMC/releases/download/v1.11.6/address_table_v1_11_6.zip`
2. `unzip address_table_v1_11_6.zip`
3. `./linkuhaltables.sh address_table_v1_11_6 ""`
## Possible Solution (for bugs)
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Change `linkuhaltables.sh` to match the syntax of the address tables in a `GEM_AMC` release format.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Cannot link uhal tables automatically on a new setup or new FW release version.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: da92e0d5493e3713307901c5afae2f83f32a1787
* Shell used: `/bin/bash`
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| priority | bug report linkuhaltables sh does not create correct links brief summary of issue cannot make uhal links with linkuhaltables sh due to syntax differences between address table filenames in the release and expectation in linkuhaltables sh types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior linkuhaltables sh should conform to the syntax of files in a fw release or gem amc should have an automated release procedure that maintains a filename convention however the last two releases in gem amc use the same syntax for the address table filenames e g current behavior linkuhaltables sh firmware gem address table ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc amc xml uhal gem amc amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address 
table uhal gem amc xml uhal gem amc xml ln fs data bigdisk gemdaq documentation system firmware files address table uhal gem amc xml uhal gem amc xml this generates a series of broken links seems like the address table xml s inside the releases use the following format ll address table total rw rw r gemuser gemuser aug uhal gem amc glib xml rw rw r gemuser gemuser aug uhal gem amc xml rw rw r gemuser gemuser nov gem amc top xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml rw rw r gemuser gemuser nov uhal gem amc xml but the format that linkuhaltables sh is coded for uses uhal gem amc subname oh linknum xml specifically in the release subname is not present and oh linknum is link linknum steps to reproduce for bugs wget unzip address table zip linkuhaltables sh address table possible solution for bugs change linkuhaltables sh to match the syntax of the address tables in a gem amc release format context cannot link uhal tables automatically on a new setup or new fw release version your environment version used shell used bin bash | 1 |
294,240 | 9,014,723,777 | IssuesEvent | 2019-02-05 23:23:47 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | arm_mpu: _get_region_attr() needs improvement | area: ARM area: Memory Protection enhancement priority: medium | This function doesn't bounds-check its arguments, the arguments don't exactly correspond to the offsets in the technical documentation (for example, the enable bit is included in the size parameter and the region parameters are pre-shifted) and hard-coded values for shift offsets are used instead of defines from CMSIS.
add __ASSERTS for bounds checks, use CMSIS for offsets/masks if possible | 1.0 | arm_mpu: _get_region_attr() needs improvement - This function doesn't bounds-check its arguments, the arguments don't exactly correspond to the offsets in the technical documentation (for example, the enable bit is included in the size parameter and the region parameters are pre-shifted) and hard-coded values for shift offsets are used instead of defines from CMSIS.
add __ASSERTS for bounds checks, use CMSIS for offsets/masks if possible | priority | arm mpu get region attr needs improvement this function doesn t bounds check its arguments the arguments don t exactly correspond to the offsets in the technical documentation for example the enable bit is included in the size parameter and the region parameters are pre shifted and hard coded values for shift offsets are used instead of defines from cmsis add asserts for bounds checks use cmsis for offsets masks if possible | 1 |
29,479 | 2,716,138,534 | IssuesEvent | 2015-04-10 17:16:18 | CruxFramework/crux | https://api.github.com/repos/CruxFramework/crux | closed | Lack of styles for widget Timer | bug imported Milestone-M14-C3 Priority-Medium TargetVersion-5.1.1 | _From [claudio....@cruxframework.org](https://code.google.com/u/102254381191677355567/) on June 13, 2014 17:35:05_
Lack of styles for widget Timer
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=408_ | 1.0 | Lack of styles for widget Timer - _From [claudio....@cruxframework.org](https://code.google.com/u/102254381191677355567/) on June 13, 2014 17:35:05_
Lack of styles for widget Timer
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=408_ | priority | lack of styles for widget timer from on june lack of styles for widget timer original issue | 1 |