| Unnamed: 0 (int64) | id (float64) | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
479,441 | 13,796,699,965 | IssuesEvent | 2020-10-09 20:19:32 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | Delay JS - IE11: Object doesn't support property or method 'forEach' | effort: [XS] module: file optimization priority: medium type: bug | **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version ✅
- Used the search feature to ensure that the bug hasn’t been reported before ✅
**Describe the bug**
The "rocket-delay-js-js-after" script we use contains a `forEach` call:
https://github.com/wp-media/wp-rocket/blob/7e8d54aaf5ad8ddffda404d7011931f1a43f3a53/assets/js/lazyload-scripts.js#L59
That ES6 usage is not supported on IE11, and it results in the following error:
```JavaScript
Object doesn't support property or method 'forEach'
```
which breaks the feature.
I'm quoting @engahmeds3ed, who was kind enough to clarify why this doesn't work in IE11:
> forEach() actually works on IE11; just be careful about how you call it.
> querySelectorAll() is a method that returns a NodeList, and on Internet Explorer, forEach() only works on Array objects. (Calling it on a NodeList is an ES6 feature, not supported by IE11.)
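The workaround in the linked article boils down to borrowing `Array.prototype.forEach`, which works on any array-like object. A minimal sketch (the plain object below is a hypothetical stand-in for the `NodeList` returned by `querySelectorAll`, so it runs outside a browser too):

```javascript
// IE11 lacks NodeList.prototype.forEach, but Array.prototype.forEach
// accepts any array-like object (numeric keys + length).
// Hypothetical stand-in for document.querySelectorAll(...)'s NodeList:
var nodeListLike = { 0: 'a', 1: 'b', 2: 'c', length: 3 };

var seen = [];
// Instead of nodeListLike.forEach(...), borrow the Array method:
Array.prototype.forEach.call(nodeListLike, function (item) {
  seen.push(item);
});
// seen is now ['a', 'b', 'c']
```

Another option mentioned in the article is assigning `NodeList.prototype.forEach = Array.prototype.forEach` once as a polyfill, after which the original call sites work unchanged on IE11.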
**To Reproduce**
Steps to reproduce the behavior:
1. Enable **Delay JavaScript execution**.
2. Visit a site using IE11, either on your PC or via **browserstack.com**.
3. Open the JavaScript console.
4. See error.
**Expected behavior**
The feature should work on IE.
**Screenshots**

**Additional context**
**Related ticket:** https://secure.helpscout.net/conversation/1287835559/196361?folderId=2135277
**Potential solution:** https://rimdev.io/foreach-for-ie-11/
**IE11 worldwide market share:** [2.1% over the previous year](https://gs.statcounter.com/browser-version-market-share/desktop/worldwide/#monthly-201908-202008)
**Backlog Grooming (for WP Media dev team use only)**
- [x] Reproduce the problem
- [x] Identify the root cause
- [x] Scope a solution
- [x] Estimate the effort
| 1.0 | Delay JS - IE11: Object doesn't support property or method 'forEach' | priority | 1 |
246,416 | 7,895,200,517 | IssuesEvent | 2018-06-29 01:43:39 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Rebuild fastbit and h5part for the 2.13.0 thirdparty_shared libraries at LLNL. | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any version: 2.12.3 | Allen updated bv_fastbit and bv_h5part on the trunk, so we should rebuild those libraries for our 2.13.0 thirdparty_shared libraries.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 01/24/2017 04:38 pm
Original update: 03/01/2018 01:11 pm
Ticket number: 2743 | 1.0 | Rebuild fastbit and h5part for the 2.13.0 thirdparty_shared libraries at LLNL. | priority | 1 |
523,988 | 15,193,567,389 | IssuesEvent | 2021-02-16 01:04:28 | code4lib/2021.code4lib.org | https://api.github.com/repos/code4lib/2021.code4lib.org | closed | Update Conference Schedule | Priority: Medium Status: In Progress | Kathy provided an extract from Whova with the finalized schedule. I'll update the information on the website over the next few days. | 1.0 | Update Conference Schedule | priority | 1 |
451,696 | 13,040,314,999 | IssuesEvent | 2020-07-28 18:17:21 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Site Import/Export fails due to pre-filled Server name on task | Area: AE > PersonaBar Ext > SiteImportExport.Web Effort: Medium Priority: Medium Status: Ready for Development Type: Bug | <!--
Please read contribution guideline first: https://github.com/dnnsoftware/Dnn.Platform/blob/development/CONTRIBUTING.md
Any potential security issues should be sent to security@dnnsoftware.com, rather than posted on GitHub
-->
## Description of bug
On a newly installed site, the Servers box on the Site Import/Export scheduled task is pre-filled with the name of the server DNN was installed on. However, on an Azure App Service the server name frequently changes (e.g. whenever you restart the app service) which causes the scheduled task to fail.
Therefore we should consider leaving the Servers box blank (meaning run this task on the current server) on a fresh installation.
## Steps to reproduce
List the steps to reproduce the behavior:
1. Install DNN 9.2.2 on an Azure App Service
2. Restart the App Service, so the server name changes
3. Kick off a Site Export from PersonaBar > Settings > Site Import/Export
4. Note how the status moves to In Progress, but never completes
5. Visit PersonaBar > Settings > Scheduler
6. Edit the Site Import/Export scheduled task, removing anything from the Servers box.
7. Re-run the task
8. Re-visit PersonaBar > Settings > Site Import/Export and note the task will now run to completion
## Current result
Explain what the current result is.
## Expected result
Provide a clear and concise description of what you expected to happen.
## Screenshots

## Affected version
<!-- Check all that apply and add more if necessary -->
* [x] 9.2.2
(I haven't tested older versions)
## Affected browser
n/a
| 1.0 | Site Import/Export fails due to pre-filled Server name on task | priority | 1 |
502,453 | 14,546,497,472 | IssuesEvent | 2020-12-15 21:20:00 | rubyforgood/casa | https://api.github.com/repos/rubyforgood/casa | closed | remove Case Contacts view from Supervisor and Admin dashboards | :clipboard: Supervisor :crown: Admin Priority: Medium | **What type of user is this for? volunteer/supervisor/admin/all OR All CASA Admin**
admins and supervisors
**Description**
The Case Contacts view can be removed for both admins and supervisors. They can already see this data by clicking on a `casa_case`, or by generating a report.
**Screenshots of current behavior, if any**
This is what the Case Contacts view looks like, except it goes on forever because there are many case contacts associated with each case. It is no longer needed.
<img width="1324" alt="Screen Shot 2020-10-24 at 3 23 33 PM" src="https://user-images.githubusercontent.com/62810851/97094794-050c2500-160d-11eb-8386-af6615fc7d3a.png">
| 1.0 | remove Case Contacts view from Supervisor and Admin dashboards | priority | 1 |
83,841 | 3,643,817,670 | IssuesEvent | 2016-02-15 05:41:01 | vega/vega-lite | https://api.github.com/repos/vega/vega-lite | opened | Do not output stroke property e.g., strokeWidth when there is no stroke fill. | bug Priority/3-Medium | (I guess the same has to be true for fill.) | 1.0 | Do not output stroke property e.g., strokeWidth when there is no stroke fill. | priority | 1 |
229,490 | 7,575,123,816 | IssuesEvent | 2018-04-23 23:54:25 | adrn/gala | https://api.github.com/repos/adrn/gala | closed | Add "fast" option to pericenter/apocenter and support multiple orbits | bug enhancement priority:medium | Right now, `.pericenter()` and `.apocenter()` are slow because they do interpolation to figure out a precise value. There should be a `fast=True` option that skips the interpolation.
We also need to support these methods for multiple orbits in the same object. | 1.0 | Add "fast" option to pericenter/apocenter and support multiple orbits | priority | 1 |
547,798 | 16,047,641,826 | IssuesEvent | 2021-04-22 15:16:47 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | opened | UI of combine js needs to match design when import settings having delay js and combine js on | module: combine JS module: tools priority: medium severity: minor type: bug | **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version => Y
- Used the search feature to ensure that the bug hasn’t been reported before => Y
**Describe the bug**
UI of combine js needs to match design when import settings having delay js and combine js on
**To Reproduce**
Precondition:
- exported settings exist from version < 3.9 with delay js and combine js on
- WPR 3.9 installed and activated
Steps to reproduce the behavior:
1. Go to the tools tab and import settings
2. open file optimization tab
3. check the UI for combine js
**Expected behavior**
Combine js is dimmed and unchecked
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Expected is

**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
| 1.0 | UI of combine js needs to match design when import settings having delay js and combine js on | priority | 1 |
77,019 | 3,506,248,209 | IssuesEvent | 2016-01-08 04:59:07 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Player limit (BB #79) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 20.03.2010 16:19:10 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/79
<hr>
You're not allowed to set negative values for GMs.
Like, .server plimit -2 would make only Game Masters able to access the server, but it simply ignores that and lets everyone still in. | 1.0 | Player limit (BB #79) | priority | 1 |
744,299 | 25,937,559,190 | IssuesEvent | 2022-12-16 15:24:31 | trimble-oss/website-modus.trimble.com | https://api.github.com/repos/trimble-oss/website-modus.trimble.com | closed | Submission guidelines - move them to GitHub | 2 priority:medium | RE: https://modus.trimble.com/community/submission-guidelines/
We could move the form and instructions to GitHub as a template. | 1.0 | Submission guidelines - move them to GitHub | priority | 1 |
698,884 | 23,995,724,738 | IssuesEvent | 2022-09-14 07:22:50 | redhat-developer/odo | https://api.github.com/repos/redhat-developer/odo | closed | add `app.openshift.io/runtime` label to resources created by odo | priority/Medium kind/user-story | /kind user-story
## User Story
As an odo and ODC user, I want to see language/framework icons in the ODC topology view, so that I can quickly recognize what is running and so that I have a consistent view with components created by ODC.
<img width="573" alt="Screenshot 2022-08-22 at 17 12 39" src="https://user-images.githubusercontent.com/57206/185956224-eeba324e-6057-422a-9675-9c1de05738ef.png">
The left is the component with the correct labels deployed with ODC, the right is nodejs component deployed using `odo dev`
## Acceptance Criteria
- [ ] `odo dev` should add `app.openshift.io/runtime` label to all resources created with value of `metadata.language` field in devfile.yaml
- [ ] `odo deploy` should add `app.openshift.io/runtime` label to all resources created with value of `metadata.language` field in devfile.yaml
/kind user-story
/priority medium
| 1.0 | add `app.openshift.io/runtime` label to resources created by odo | priority | 1 |
264,250 | 8,306,910,939 | IssuesEvent | 2018-09-23 00:53:23 | pennmush/pennmush | https://api.github.com/repos/pennmush/pennmush | closed | File descriptor leak with curl | Component-HTTP bug priority medium | Doing a `@shutdown/reboot` when there are outstanding `@http` requests causes the sockets being used for those to be forgotten but still left open. This will eventually take up all available descriptors.
Fix (when I have a few minutes to write it) will probably involve setting the CLOEXEC flag on those descriptors. | 1.0 | File descriptor leak with curl | priority | 1 |
204,365 | 7,087,353,331 | IssuesEvent | 2018-01-11 17:31:05 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Apply Status to Case Updates | Fix Proposed Medium Priority Resolved: Next Release bug | <!--- Provide a general summary of the issue in the **Title** above -->
<!--- Before you open an issue, please check if a similar issue already exists or has been closed before. --->
#### Issue
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
At AOP and Case flow.
In the area "Admin>AOP Setting > Case Status Changes" not running.
#### Expected Behavior
<!--- Tell us what should happen -->
When a new case update exists, the system must apply the state changes as established
#### Actual Behavior
<!--- Tell us what happens instead -->
Not change
#### Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Location: .../modules/AOP_Case_Updates/CaseUpdateHook.php
Class, function: updateCaseStatus (line 299)
```
/* PPW - CODe - Modifiación CORE */
// if (!empty($case->id)) {
if (empty($case->id)) {
```
NOTE: the "!" was deleted.
#### Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Go to Admin > Email Settings and configure smtp mail
2. Go to Admin > Inbound Email and configure inbound mail
3. Go to Admin > AOP Setting > Enable AOP and configure "Case Status Changes"
4. Go to Cases > Create an update on a case
5. Go to the final client's email and reply to the mail
6. Go to the case and verify that the status has not been changed
#### Context
<!--- How has this bug affected you? What were you trying to accomplish? -->
<!--- If you feel this should be a low/medium/high priority then please state so -->
Customer Support. High priority
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.8.2
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): All
* Environment name and version (e.g. MySQL, PHP 7): Mysql/MariaDb, PHP7
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 16.04
| 1.0 | Apply Status to Case Updates | priority | 1 |
331,181 | 10,061,395,662 | IssuesEvent | 2019-07-22 21:13:59 | svof/svof | https://api.github.com/repos/svof/svof | closed | Monk Transmute | bug confirmed futher analysis needed in-client medium priority | Forwarded by Andraste. Monk transmute apparently is having some issues so will need to have a look at it. | 1.0 | Monk Transmute - Forwarded by Andraste. Monk transmute apparently is having some issues so will need to have a look at it. | priority | monk transmute forwarded by andraste monk transmute apparently is having some issues so will need to have a look at it | 1 |
298,339 | 9,199,015,802 | IssuesEvent | 2019-03-07 14:04:06 | cms-gem-daq-project/gem-plotting-tools | https://api.github.com/repos/cms-gem-daq-project/gem-plotting-tools | closed | Bug Report: anaDACScans.py generates KeyError if OH0 not in chamber_config | Priority: Medium Status: Help Wanted Type: Bug | <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
`anaDACScans.py` throws a `KeyError` if `0` (OH0) is not in the list of keys for `chamber_config`.
### Types of issue
<!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [X] Bug report (report an issue with the code)
- [ ] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
This should not throw.
I can imagine two cases:
1. data is taken and OH0 actually existed in the data but for some reason `chamber_config` was not updated correctly,
2. OH0 is assigned as a default for the case of missing links and exists as a dummy.
In case 1 we might want to still analyze this data and place it in some "uncategorised" location. In case 2 perhaps the GEM Tree format should be updated to not set this default link...
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
These lines will generate a `KeyError` if `0` is not in `chamber_config` dictionary of `chamberInfo.py`:
https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/390a76897576eb0eba97eb8286c644b418891025/anaDACScan.py#L106-L112
## Possible Solution (for bugs)
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Set the chamber name as follows:
```python
if oh in chamber_config.keys():
    cName = chamber_config[oh]
else:
    cName = "unknown"
```
Then `cName` is passed to `runCommand` instead of `chamber_config[oh]`. Not sure if this is the best solution though.
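An equivalent, more compact form of the sketch above (an editorial illustration added here, not taken from the original issue) uses `dict.get` with a default value, which avoids the separate membership test entirely:

```python
# Hypothetical stand-in for the real chamber_config mapping from chamberInfo.py;
# the keys and chamber names below are invented purely for illustration.
chamber_config = {1: "GE11-X-S-0001", 3: "GE11-X-S-0002"}

def chamber_name(oh, config):
    """Return the chamber name for optohybrid link `oh`, or "unknown" if unmapped."""
    return config.get(oh, "unknown")

print(chamber_name(1, chamber_config))  # a mapped link returns its configured name
print(chamber_name(0, chamber_config))  # a missing OH0 no longer raises KeyError
```

Either variant sidesteps the `KeyError`; `dict.get` is simply the idiomatic single-expression way to express "look up with a fallback" in Python.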
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Prevents data analysis of DAC scans.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: 59ce1e4cf86c0c9a949969f2df22958a0b8ed43f
* Shell used: `zsh`
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| 1.0 | Bug Report: anaDACScans.py generates KeyError if OH0 not in chamber_config - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
`anaDACScans.py` throws a `KeyError` if `0` (OH0) is not in the list of keys for `chamber_config`.
### Types of issue
<!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [X] Bug report (report an issue with the code)
- [ ] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
This should not throw.
I can imagine two cases:
1. data is taken and OH0 actually existed in the data but for some reason `chamber_config` was not updated correctly,
2. OH0 is assigned as a default for the case of missing links and exists as a dummy.
In case 1 we might want to still analyze this data and place it in some "uncategorised" location. In case 2 perhaps the GEM Tree format should be updated to not set this default link...
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
These lines will generate a `KeyError` if `0` is not in `chamber_config` dictionary of `chamberInfo.py`:
https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/390a76897576eb0eba97eb8286c644b418891025/anaDACScan.py#L106-L112
## Possible Solution (for bugs)
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Set the chamber name as follows:
```python
if oh in chamber_config.keys():
    cName = chamber_config[oh]
else:
    cName = "unknown"
```
Then `cName` is passed to `runCommand` instead of `chamber_config[oh]`. Not sure if this is the best solution though.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Prevents data analysis of DAC scans.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: 59ce1e4cf86c0c9a949969f2df22958a0b8ed43f
* Shell used: `zsh`
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| priority | bug report anadacscans py generates keyerror if not in chamber config brief summary of issue anadacscans py throws a keyerror if is not in the list of keys for chamber config types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior this should not throw i can imagine two cases data is taken and actually existed in the data but for some reason chamber config was not updated correctly is assigned as a default for the case of missing links and exists as a dummy in case we might want to still analyze this data and place it in some uncategorised location in case perhaps the gem tree format should be updated to not set this default link current behavior these lines will generate a keyerror if is not in chamber config dictionary of chamberinfo py possible solution for bugs set the chamber name as follows python if oh in chamber config keys cname chamber config else cname unknown then cname is passed to runcommand instead of chamber config not sure if this is the best solution though context for feature requests prevents data analysis of dac scans your environment version used shell used zsh | 1 |
696,087 | 23,883,832,122 | IssuesEvent | 2022-09-08 05:33:10 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Need an ingame way to list values for debugging | Priority: 3-Not Required Issue: Feature Request Difficulty: 2-Medium | Even if the UI is just generic and it requires custom code to fill in the data. Extremely useful for balancing being able to see everything at once.
E.g.
List all melee weapons with their cooldowns, range, arcs, etc. | 1.0 | Need an ingame way to list values for debugging - Even if the UI is just generic and it requires custom code to fill in the data. Extremely useful for balancing being able to see everything at once.
E.g.
List all melee weapons with their cooldowns, range, arcs, etc. | priority | need an ingame way to list values for debugging even if the ui is just generic and it requires custom code to fill in the data extremely useful for balancing being able to see everything at once e g list all melee weapons with their cooldowns range arcs etc | 1 |
16,861 | 2,615,125,371 | IssuesEvent | 2015-03-01 05:53:28 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | opened | A very small code required to obtain accessToken from Authorization Token | auto-migrated Priority-Medium Type-Sample | ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
Google Analytics Reporting for Android
What format (e.g. JSON, Atom)?
JSON , Java
What Authentation (e.g. OAuth, OAuth 2, ClientLogin)?
OAuth 2
Java environment (e.g. Java 6, Android 2.3, App Engine)?
Java 6, Android 2.3+
External references, such as API reference guide?
Please provide any additional information below.
The main problem I am facing is I have successfully obtained the Authorization
Token using Account manager for Analytics .But There is no single code of
getting of AccessTOken from the authorization token . Please Help !
```
Original issue reported on code.google.com by `abdulreh...@gmail.com` on 27 Mar 2012 at 7:23 | 1.0 | A very small code required to obtain accessToken from Authorization Token - ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
Google Analytics Reporting for Android
What format (e.g. JSON, Atom)?
JSON , Java
What Authentation (e.g. OAuth, OAuth 2, ClientLogin)?
OAuth 2
Java environment (e.g. Java 6, Android 2.3, App Engine)?
Java 6, Android 2.3+
External references, such as API reference guide?
Please provide any additional information below.
The main problem I am facing is I have successfully obtained the Authorization
Token using Account manager for Analytics .But There is no single code of
getting of AccessTOken from the authorization token . Please Help !
```
Original issue reported on code.google.com by `abdulreh...@gmail.com` on 27 Mar 2012 at 7:23 | priority | a very small code required to obtain accesstoken from authorization token which google api and version e g google calendar data api version google analytics reporting for android what format e g json atom json java what authentation e g oauth oauth clientlogin oauth java environment e g java android app engine java android external references such as api reference guide please provide any additional information below the main problem i am facing is i have successfully obtained the authorization token using account manager for analytics but there is no single code of getting of accesstoken from the authorization token please help original issue reported on code google com by abdulreh gmail com on mar at | 1 |
520,740 | 15,091,994,840 | IssuesEvent | 2021-02-06 17:45:36 | KoderKow/twitchr | https://api.github.com/repos/KoderKow/twitchr | closed | Get Clips | Difficulty: [2] Intermediate Effort: [2] Medium Priority: [1] Low Type: ★ Enhancement | Gets clip information by clip ID (one or more), broadcaster ID (one only), or game ID (one only).
Note: The clips service returns a maximum of 1000 clips.
The response has a JSON payload with a data field containing an array of clip information elements and a pagination field containing information required to query for more streams.
https://dev.twitch.tv/docs/api/reference#get-clips | 1.0 | Get Clips - Gets clip information by clip ID (one or more), broadcaster ID (one only), or game ID (one only).
Note: The clips service returns a maximum of 1000 clips.
The response has a JSON payload with a data field containing an array of clip information elements and a pagination field containing information required to query for more streams.
https://dev.twitch.tv/docs/api/reference#get-clips | priority | get clips gets clip information by clip id one or more broadcaster id one only or game id one only note the clips service returns a maximum of clips the response has a json payload with a data field containing an array of clip information elements and a pagination field containing information required to query for more streams | 1 |
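As an editorial sketch (not part of the original issue text), the cursor-based pagination described in the row above can be walked roughly as follows; `fetch_page` is a hypothetical stand-in for the real HTTPS request to the Helix clips endpoint, and the payload shape follows the `data`/`pagination` fields the description mentions:

```python
def get_all_clips(fetch_page, max_clips=1000):
    """Collect clips page by page until no pagination cursor is returned.

    fetch_page(cursor) must return a dict shaped like the documented payload:
    {"data": [...], "pagination": {"cursor": ...}}.
    """
    clips, cursor = [], None
    while len(clips) < max_clips:  # the service caps results at 1000 clips
        payload = fetch_page(cursor)
        clips.extend(payload.get("data", []))
        cursor = payload.get("pagination", {}).get("cursor")
        if not cursor:  # an absent cursor means there are no further pages
            break
    return clips[:max_clips]

# Stubbed two-page response used purely for illustration
pages = {
    None: {"data": [{"id": "clip1"}], "pagination": {"cursor": "p2"}},
    "p2": {"data": [{"id": "clip2"}], "pagination": {}},
}
print(get_all_clips(pages.__getitem__))  # → [{'id': 'clip1'}, {'id': 'clip2'}]
```

A real client would add the broadcaster/game/clip ID filter plus authentication headers; the cursor loop is the part the quoted description is pointing at.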
26,002 | 2,684,094,621 | IssuesEvent | 2015-03-28 17:05:02 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Incorrect interpretation of the EchoX code page | 1 star bug imported invalid Priority-Medium | _From [gigaplas...@gmail.com](https://code.google.com/u/106336574353395140522/) on June 03, 2012 00:23:30_
Windows 7 SP1 x86 ConEmu 120417 x86
Far 2.0.1777 x86
When using ConEmu, the Cyrillic characters output by the EchoX utility in code page 1251 (the command "chcp 1251" is run beforehand) are displayed incorrectly, while in code page 866 (no chcp command is run) Cyrillic is displayed correctly. A "plain" Far displays everything correctly.
The behavior was observed on the version of ConEmu indicated above; I tried rolling back as far as 110308a - the behavior is identical for all intermediate versions, including 110308a. A batch file illustrating the behavior is attached.
EchoX is part of the Shell Scripting Toolkit package http://www.westmesatech.com/sst.html ; the latest version of the package (sst27) is used.
The "Inject ConEmuHk" option is enabled. The Background, Lines, Thumbs (sub)plugins have been removed (they are not used - though I think that is irrelevant).
**Attachment:** [test-cp.7z](http://code.google.com/p/conemu-maximus5/issues/detail?id=565)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=565_ | 1.0 | Incorrect interpretation of the EchoX code page - _From [gigaplas...@gmail.com](https://code.google.com/u/106336574353395140522/) on June 03, 2012 00:23:30_
Windows 7 SP1 x86 ConEmu 120417 x86
Far 2.0.1777 x86
When using ConEmu, the Cyrillic characters output by the EchoX utility in code page 1251 (the command "chcp 1251" is run beforehand) are displayed incorrectly, while in code page 866 (no chcp command is run) Cyrillic is displayed correctly. A "plain" Far displays everything correctly.
The behavior was observed on the version of ConEmu indicated above; I tried rolling back as far as 110308a - the behavior is identical for all intermediate versions, including 110308a. A batch file illustrating the behavior is attached.
EchoX is part of the Shell Scripting Toolkit package http://www.westmesatech.com/sst.html ; the latest version of the package (sst27) is used.
The "Inject ConEmuHk" option is enabled. The Background, Lines, Thumbs (sub)plugins have been removed (they are not used - though I think that is irrelevant).
**Attachment:** [test-cp.7z](http://code.google.com/p/conemu-maximus5/issues/detail?id=565)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=565_ | priority | некорректная интерпретация кодовой страницы echox from on june windows conemu far при использовании conemu некорректно отображаются кириллические символы выводимые утилитой echox в кодовой странице предварительно выполнена команда chcp при этом в кодовой странице команда chcp не выполняется кириллица отображается корректно чистый far отображает всё корректно поведение замечено на указанной версии conemu пробовал делать откаты вплоть до поведение совпадает для всех промежуточных версий включая пакетный файл иллюстрирующий поведение прилагается echox является частью пакета shell scripting toolkit используется последняя версия пакета опция inject conemuhk включена суб плагины background lines thumbs удалены не используются хотя думаю это неважно attachment original issue | 1 |
376,088 | 11,138,283,122 | IssuesEvent | 2019-12-20 21:52:33 | Apexal/late | https://api.github.com/repos/Apexal/late | closed | Block Revamping | Area: Back End Area: Front End Difficulty: Hard Priority: Medium | Blocks should be schedulable for assessments, courses, todos, and general events.
I must revamp how Vuex tracks work blocks on the frontend and how the blocks are handled on the backend. This will start with Tobias and issue #532 and then #544 | 1.0 | Block Revamping - Blocks should be schedulable for assessments, courses, todos, and general events.
I must revamp how Vuex tracks work blocks on the frontend and how the blocks are handled on the backend. This will start with Tobias and issue #532 and then #544 | priority | block revamping blocks should be schedulable for assessments courses todos and general events i must revamp how vuex tracks work blocks on the frontend and how the blocks are handled on the backend this will start with tobias and issue and then | 1 |
809,549 | 30,197,395,009 | IssuesEvent | 2023-07-04 23:53:06 | CodeSystem2022/Team-Fortran-2023 | https://api.github.com/repos/CodeSystem2022/Team-Fortran-2023 | closed | Class 2: Blocks and much more (Java) | Medium priority codigo points:1 | - [x] 1 Variable arguments
- [x] 2 Handling enumerations (enum)
- [x] 3 enum tests, with the creation of the enum Continentes
- [x] 4 Handling code blocks | 1.0 | Class 2: Blocks and much more (Java) - - [x] 1 Variable arguments
- [x] 2 Handling enumerations (enum)
- [x] 3 enum tests, with the creation of the enum Continentes
- [x] 4 Handling code blocks | priority | clase bloques y mucho más java argumentos variables manejo de enumeraciones enum pruebas de enum con la creación de enum continentes manejo de bloques de código | 1 |
811,366 | 30,285,283,397 | IssuesEvent | 2023-07-08 15:44:50 | MarcusZagorski/Coursework-Planner | https://api.github.com/repos/MarcusZagorski/Coursework-Planner | opened | [PD] Organise a study session about Time Management tools | 🏕 Priority Mandatory 🐂 Size Medium 📅 Week 4 🎯 Topic Communication 🎯 Topic Time Management 🎯 Topic Teamwork 📅 Fundamentals | From Course-Fundamentals created by [SallyMcGrath](https://github.com/SallyMcGrath): CodeYourFuture/Course-Fundamentals#12
### Coursework content
Organise a study session with the pair you were assigned to during class - check out the Google Sheet.
Think about how you manage your time and which tools you use (add some examples and suggestions from our side). If you still need to start using them, research some and bring them to this meeting.
### Estimated time in hours
2
### What is the purpose of this assignment?
- [ ] Understand how you and your pair organise your time
- [ ] Identify at least 2 time management tools each
- [ ] With your pair, write a short paragraph about your findings
- [ ] Share your findings in the "Time Management Tools" thread on your cohort Slack Channel. _Search for it on the channel. If the thread is not yet available, you can create it_
- [ ] Read your peers text and react to it the the appropriate emoji
### How to submit
Add the link to your post on Slack on this coursework
Add a screenshot of your post on this coursework | 1.0 | [PD] Organise a study session about Time Management tools - From Course-Fundamentals created by [SallyMcGrath](https://github.com/SallyMcGrath): CodeYourFuture/Course-Fundamentals#12
### Coursework content
Organise a study session with the pair you were assigned to during class - check out the Google Sheet.
Think about how you manage your time and which tools you use (add some examples and suggestions from our side). If you still need to start using them, research some and bring them to this meeting.
### Estimated time in hours
2
### What is the purpose of this assignment?
- [ ] Understand how you and your pair organise your time
- [ ] Identify at least 2 time management tools each
- [ ] With your pair, write a short paragraph about your findings
- [ ] Share your findings in the "Time Management Tools" thread on your cohort Slack Channel. _Search for it on the channel. If the thread is not yet available, you can create it_
- [ ] Read your peers text and react to it the the appropriate emoji
### How to submit
Add the link to your post on Slack on this coursework
Add a screenshot of your post on this coursework | priority | organise a study session about time management tools from course fundamentals created by codeyourfuture course fundamentals coursework content organise a study session with the pair you were assigned to during class check out the google sheet think about how you manage your time and which tools you use add some examples and suggestions from our side if you still need to start using them research some and bring them to this meeting estimated time in hours what is the purpose of this assignment understand how you and your pair organise your time identify at least time management tools each with your pair write a short paragraph about your findings share your findings in the time management tools thread on your cohort slack channel search for it on the channel if the thread is not yet available you can create it read your peers text and react to it the the appropriate emoji how to submit add the link to your post on slack on this coursework add a screenshot of your post on this coursework | 1 |
509,406 | 14,730,059,465 | IssuesEvent | 2021-01-06 12:34:43 | OpenMined/openmined | https://api.github.com/repos/OpenMined/openmined | closed | Pressing "enter" on Sign In will trigger the Github Login | Priority: 3 - Medium :unamused: Severity: 3 - Medium :unamused: Status: Available :wave: Type: Bug :bug: | When we hit the "enter" or "return" key on the sign in form, it triggers the Github login instead of the "Sign In" button. This is a bit confusing, and ironically, does the exact opposite for the sign up form. So the sign up form works as intended, but sign up doesn't... despite being basically identical code. | 1.0 | Pressing "enter" on Sign In will trigger the Github Login - When we hit the "enter" or "return" key on the sign in form, it triggers the Github login instead of the "Sign In" button. This is a bit confusing, and ironically, does the exact opposite for the sign up form. So the sign up form works as intended, but sign up doesn't... despite being basically identical code. | priority | pressing enter on sign in will trigger the github login when we hit the enter or return key on the sign in form it triggers the github login instead of the sign in button this is a bit confusing and ironically does the exact opposite for the sign up form so the sign up form works as intended but sign up doesn t despite being basically identical code | 1 |
271,520 | 8,484,767,948 | IssuesEvent | 2018-10-26 04:34:49 | minio/minio | https://api.github.com/repos/minio/minio | closed | Node port not opened when minio is installed using helm | priority: medium triage | <!--- Provide a general summary of the issue in the Title above -->
I installed minio using helm chart like below .
helm install --set accessKey=myaccesskey,secretKey=mysecretkey stable/minio
It was successfull.
[root@kube-master-0-prodes-1539172764 madhan]# kubectl get pods
NAME READY STATUS RESTARTS AGE
good-deer-minio-795c9b457d-sbnjg 1/1 Running 0 25s
[root@kube-master-0-prodes-1539172764 madhan]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
good-deer-minio ClusterIP 10.233.26.112 <none> 9000/TCP 9m
The pod deployed successfully.But, I am unable to acccess it using node port .Since the node port was not opened.
Port forwarding also not working ..
[root@kube-master-0-prodes-1539172764 madhan]# kubectl port-forward good-deer-minio-795c9b457d-sbnjg 9000:31001
Forwarding from 127.0.0.1:9000 -> 31001
it got struck like the above and i am unable to forward the port.
| 1.0 | Node port not opened when minio is installed using helm - <!--- Provide a general summary of the issue in the Title above -->
I installed minio using helm chart like below .
helm install --set accessKey=myaccesskey,secretKey=mysecretkey stable/minio
It was successfull.
[root@kube-master-0-prodes-1539172764 madhan]# kubectl get pods
NAME READY STATUS RESTARTS AGE
good-deer-minio-795c9b457d-sbnjg 1/1 Running 0 25s
[root@kube-master-0-prodes-1539172764 madhan]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
good-deer-minio ClusterIP 10.233.26.112 <none> 9000/TCP 9m
The pod deployed successfully.But, I am unable to acccess it using node port .Since the node port was not opened.
Port forwarding also not working ..
[root@kube-master-0-prodes-1539172764 madhan]# kubectl port-forward good-deer-minio-795c9b457d-sbnjg 9000:31001
Forwarding from 127.0.0.1:9000 -> 31001
it got struck like the above and i am unable to forward the port.
| priority | node port not opened when minio is installed using helm i installed minio using helm chart like below helm install set accesskey myaccesskey secretkey mysecretkey stable minio it was successfull kubectl get pods name ready status restarts age good deer minio sbnjg running kubectl get services name type cluster ip external ip port s age good deer minio clusterip tcp the pod deployed successfully but i am unable to acccess it using node port since the node port was not opened port forwarding also not working kubectl port forward good deer minio sbnjg forwarding from it got struck like the above and i am unable to forward the port | 1 |
92,643 | 3,872,899,338 | IssuesEvent | 2016-04-11 15:15:46 | jcgregorio/httplib2 | https://api.github.com/repos/jcgregorio/httplib2 | closed | Multiple heads in in source repo make merging a pain | bug imported Priority-Medium | _From [kkvilek...@gmail.com](https://code.google.com/u/110456896135066953261/) on September 29, 2011 13:23:01_
What steps will reproduce the problem? 1. hg clone https://code.google.com/p/httplib2/ 2. cd httplib2
3. hg heads What is the expected output? What do you see instead? hg heads
*** failed to import extension hgext.qct: No module named qct
changeset: 198:6525cadfde53
tag: tip
user: Joe Gregorio <jcgregorio@google.com>
date: Thu Jun 23 15:41:24 2011 -0400
summary: Change out Go Daddy root ca for their ca bundle. Also add checks for version number matching when doing releases.
changeset: 116:ecfe07128337
branch: ivo
user: Ivo Timmermans <zxnrbl@gmail.com>
date: Fri Jul 17 10:54:15 2009 +0200
summary: Add untested GSSAPI authentication handlers for Kerberos.
changeset: 10:530da6dab120
branch: antill-bug
parent: 8:52484af43ff4
user: jcgregorio
date: Tue Feb 14 04:06:41 2006 +0000
summary: Added support for Python 2.3 What version of the product are you using? On what operating system? mercurial 0.9.1 Please provide any additional information below.
_Original issue: http://code.google.com/p/httplib2/issues/detail?id=181_ | 1.0 | Multiple heads in in source repo make merging a pain - _From [kkvilek...@gmail.com](https://code.google.com/u/110456896135066953261/) on September 29, 2011 13:23:01_
What steps will reproduce the problem? 1. hg clone https://code.google.com/p/httplib2/ 2. cd httplib2
3. hg heads What is the expected output? What do you see instead? hg heads
*** failed to import extension hgext.qct: No module named qct
changeset: 198:6525cadfde53
tag: tip
user: Joe Gregorio <jcgregorio@google.com>
date: Thu Jun 23 15:41:24 2011 -0400
summary: Change out Go Daddy root ca for their ca bundle. Also add checks for version number matching when doing releases.
changeset: 116:ecfe07128337
branch: ivo
user: Ivo Timmermans <zxnrbl@gmail.com>
date: Fri Jul 17 10:54:15 2009 +0200
summary: Add untested GSSAPI authentication handlers for Kerberos.
changeset: 10:530da6dab120
branch: antill-bug
parent: 8:52484af43ff4
user: jcgregorio
date: Tue Feb 14 04:06:41 2006 +0000
summary: Added support for Python 2.3 What version of the product are you using? On what operating system? mercurial 0.9.1 Please provide any additional information below.
_Original issue: http://code.google.com/p/httplib2/issues/detail?id=181_ | priority | multiple heads in in source repo make merging a pain from on september what steps will reproduce the problem hg clone cd hg heads what is the expected output what do you see instead hg heads failed to import extension hgext qct no module named qct changeset tag tip user joe gregorio date thu jun summary change out go daddy root ca for their ca bundle also add checks for version number matching when doing releases changeset branch ivo user ivo timmermans date fri jul summary add untested gssapi authentication handlers for kerberos changeset branch antill bug parent user jcgregorio date tue feb summary added support for python what version of the product are you using on what operating system mercurial please provide any additional information below original issue | 1 |
61,175 | 3,141,500,853 | IssuesEvent | 2015-09-12 17:07:36 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | sct_resample output has black lines on first slice (z=0) | bug priority: medium sct_resample | original image voxel dimension: 0.46875x0.46875x15mm
wanted voxel dimension: 0.5x0.5x15mm
path to data:
``/Volumes/folder_shared/greymattersegmentation/DATA_AMU15/all_data_3d_resampling_old_bad_res/G1_2/3d_data``
command:
``sct_resample -i G1_2_im.nii.gz -f 0.9375x0.9375x1 -o G1_2_im_test_resample.nii.gz ``
original image:

Result image:

(c3d -resample-mm gives result worst)
| 1.0 | sct_resample output has black lines on first slice (z=0) - original image voxel dimension: 0.46875x0.46875x15mm
wanted voxel dimension: 0.5x0.5x15mm
path to data:
``/Volumes/folder_shared/greymattersegmentation/DATA_AMU15/all_data_3d_resampling_old_bad_res/G1_2/3d_data``
command:
``sct_resample -i G1_2_im.nii.gz -f 0.9375x0.9375x1 -o G1_2_im_test_resample.nii.gz ``
original image:

Result image:

(c3d -resample-mm gives result worst)
| priority | sct resample output has black lines on first slice z original image voxel dimension wanted voxel dimension path to data volumes folder shared greymattersegmentation data all data resampling old bad res data command sct resample i im nii gz f o im test resample nii gz original image result image resample mm gives result worst | 1 |
677,920 | 23,179,780,645 | IssuesEvent | 2022-07-31 23:41:24 | City-Bureau/city-scrapers-atl | https://api.github.com/repos/City-Bureau/city-scrapers-atl | opened | New Scraper: DeKalb County Board of Ethics | priority-medium | Create a new scraper for DeKalb County Board of Ethics
Website: https://www.dekalbcountyga.gov/meeting-calendar
Jurisdiction: DeKalb County
Classification:
The DeKalb County Board of Ethics serves to interpret the Code of Ethics adopted by the county, to apply sanctions to those in violation of the Code, and to issue advisory opinions defining appropriate behaviors according to community standards as reflected in that Code. When complaints are registered against commissioners or other county employees or appointees over whom the Board has jurisdiction, the Board addresses the matter. If appropriate, a hearing will be scheduled and held to obtain evidence on the issue. Should the party accused be deemed to have violated the Code of Ethics, the Board will recommend appropriate penalties or sanctions. (https://dekalbcountyga.granicus.com/boards/w/968f9572ef2211df/boards/7137)
| 1.0 | New Scraper: DeKalb County Board of Ethics - Create a new scraper for DeKalb County Board of Ethics
Website: https://www.dekalbcountyga.gov/meeting-calendar
Jurisdiction: DeKalb County
Classification:
The DeKalb County Board of Ethics serves to interpret the Code of Ethics adopted by the county, to apply sanctions to those in violation of the Code, and to issue advisory opinions defining appropriate behaviors according to community standards as reflected in that Code. When complaints are registered against commissioners or other county employees or appointees over whom the Board has jurisdiction, the Board addresses the matter. If appropriate, a hearing will be scheduled and held to obtain evidence on the issue. Should the party accused be deemed to have violated the Code of Ethics, the Board will recommend appropriate penalties or sanctions. (https://dekalbcountyga.granicus.com/boards/w/968f9572ef2211df/boards/7137)
| priority | new scraper dekalb county board of ethics create a new scraper for dekalb county board of ethics website jurisdiction dekalb county classification the dekalb county board of ethics serves to interpret the code of ethics adopted by the county to apply sanctions to those in violation of the code and to issue advisory opinions defining appropriate behaviors according to community standards as reflected in that code when complaints are registered against commissioners or other county employees or appointees over whom the board has jurisdiction the board addresses the matter if appropriate a hearing will be scheduled and held to obtain evidence on the issue should the party accused be deemed to have violated the code of ethics the board will recommend appropriate penalties or sanctions | 1 |
416,528 | 12,147,961,215 | IssuesEvent | 2020-04-24 13:51:54 | pa11y/pa11y-reporter-cli | https://api.github.com/repos/pa11y/pa11y-reporter-cli | closed | Replace chalk with a leaner dependency | priority: medium status: good starter issue type: enhancement | [`pa11y-reporter-cli` has a significant install size (114kB)](https://packagephobia.now.sh/result?p=pa11y-reporter-cli), mostly caused by its single dependency [`chalk`, which has an install size of 105kB](https://packagephobia.now.sh/result?p=chalk).
There are some similar packages which seem to have zero dependencies and an install size of ~10kB:
* https://packagephobia.now.sh/result?p=kleur
* https://packagephobia.now.sh/result?p=colorette
We should consider replacing this library wherever is used in pa11y. | 1.0 | Replace chalk with a leaner dependency - [`pa11y-reporter-cli` has a significant install size (114kB)](https://packagephobia.now.sh/result?p=pa11y-reporter-cli), mostly caused by its single dependency [`chalk`, which has an install size of 105kB](https://packagephobia.now.sh/result?p=chalk).
There are some similar packages which seem to have zero dependencies and an install size of ~10kB:
* https://packagephobia.now.sh/result?p=kleur
* https://packagephobia.now.sh/result?p=colorette
We should consider replacing this library wherever is used in pa11y. | priority | replace chalk with a leaner dependency mostly caused by its single dependency there are some similar packages which seem to have zero dependencies and an install size of we should consider replacing this library wherever is used in | 1 |
241,616 | 7,818,137,611 | IssuesEvent | 2018-06-13 11:17:47 | Kris-LIBIS/PdfTool | https://api.github.com/repos/Kris-LIBIS/PdfTool | closed | selectie van pagina's uit pdf's | feature priority 2: medium | dit is issue 3 van de Lias_ingester:
Er moet een selectie (random) gemaakt kunnen worden van de VIEW pdf om te ingesten als VIEW_MAIN. Voorzie een aantal configuratiemogelijkheden (limitative lijst van pagina's die opgenomen moeten worden, procentueel aantal pagina's (vb 10%), (on)even pagina's); Om random en procentuele selecties te maken van een pdf
| 1.0 | selectie van pagina's uit pdf's - dit is issue 3 van de Lias_ingester:
Er moet een selectie (random) gemaakt kunnen worden van de VIEW pdf om te ingesten als VIEW_MAIN. Voorzie een aantal configuratiemogelijkheden (limitative lijst van pagina's die opgenomen moeten worden, procentueel aantal pagina's (vb 10%), (on)even pagina's); Om random en procentuele selecties te maken van een pdf
| priority | selectie van pagina s uit pdf s dit is issue van de lias ingester er moet een selectie random gemaakt kunnen worden van de view pdf om te ingesten als view main voorzie een aantal configuratiemogelijkheden limitative lijst van pagina s die opgenomen moeten worden procentueel aantal pagina s vb on even pagina s om random en procentuele selecties te maken van een pdf | 1 |
410,459 | 11,991,874,991 | IssuesEvent | 2020-04-08 09:08:12 | AY1920S2-CS2103T-W16-4/main | https://api.github.com/repos/AY1920S2-CS2103T-W16-4/main | closed | As a NUS student, I would like to set my events/todos into different priority levels. | priority.Medium type.Story | so that I can look for the important ones more clearly.
| 1.0 | As a NUS student, I would like to set my events/todos into different priority levels. - so that I can look for the important ones more clearly.
| priority | as a nus student i would like to set my events todos into different priority levels so that i can look for the important ones more clearly | 1 |
813,344 | 30,454,529,800 | IssuesEvent | 2023-07-16 18:13:35 | codidact/qpixel | https://api.github.com/repos/codidact/qpixel | closed | Can't revert to a blank profile | area: ruby meta: good first issue meta: help wanted type: bug priority: medium complexity: easy | https://meta.codidact.com/posts/287128
https://meta.codidact.com/posts/287130
The "about" section of a user profile starts out blank. If you've edited it and then later want to revert to the blank state, you can't -- the "save" button is disabled. You should be able to remove content that you added in the first place (without resorting to HTML trickery).
I *suspect* that the "about" block is a post and we have a minimum length for posts. Can we create an exception for this specific post type and allow a zero-length body?
| 1.0 | Can't revert to a blank profile - https://meta.codidact.com/posts/287128
https://meta.codidact.com/posts/287130
The "about" section of a user profile starts out blank. If you've edited it and then later want to revert to the blank state, you can't -- the "save" button is disabled. You should be able to remove content that you added in the first place (without resorting to HTML trickery).
I *suspect* that the "about" block is a post and we have a minimum length for posts. Can we create an exception for this specific post type and allow a zero-length body?
| priority | can t revert to a blank profile the about section of a user profile starts out blank if you ve edited it and then later want to revert to the blank state you can t the save button is disabled you should be able to remove content that you added in the first place without resorting to html trickery i suspect that the about block is a post and we have a minimum length for posts can we create an exception for this specific post type and allow a zero length body | 1 |
745,621 | 25,992,228,271 | IssuesEvent | 2022-12-20 08:39:51 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | NPCs need collision avoidance again | Priority: 2-Before Release Issue: Feature Request Difficulty: 2-Medium | I have a branch using RVO2-CS that seems to work pretty decently but static body avoidance needs porting.
We don't actually 'need' it but it's the easiest way to avoid stacking. | 1.0 | NPCs need collision avoidance again - I have a branch using RVO2-CS that seems to work pretty decently but static body avoidance needs porting.
We don't actually 'need' it but it's the easiest way to avoid stacking. | priority | npcs need collision avoidance again i have a branch using cs that seems to work pretty decently but static body avoidance needs porting we don t actually need it but it s the easiest way to avoid stacking | 1 |
637,346 | 20,625,891,333 | IssuesEvent | 2022-03-07 22:29:42 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Home Wiki Page - Personal Wiki Linking | enhancement priority-medium waiting-for-others | Add the personal wiki page links of the team members to the [home wiki page](https://github.com/bounswe/bounswe2022group2/wiki).
* This issue has to wait for the completion of the other people's personal wiki pages to be closed.
* It depends on issue #5. | 1.0 | Home Wiki Page - Personal Wiki Linking - Add the personal wiki page links of the team members to the [home wiki page](https://github.com/bounswe/bounswe2022group2/wiki).
* This issue has to wait for the completion of the other people's personal wiki pages to be closed.
* It depends on issue #5. | priority | home wiki page personal wiki linking add the personal wiki page links of the team members to the this issue has to wait for the completion of the other people s personal wiki pages to be closed it depends on issue | 1 |
118,207 | 4,733,071,276 | IssuesEvent | 2016-10-19 09:56:16 | Nexteria/Nextis | https://api.github.com/repos/Nexteria/Nextis | closed | Ziskat default database dump pre lokálny server | Medium priority | Pointa je, aby si každý mohol v ľubovolnom momente vytvoriť usera s admin pravami s prednastavenym menom a heslom na testovacie ucely na lokalnom deployi | 1.0 | Ziskat default database dump pre lokálny server - Pointa je, aby si každý mohol v ľubovolnom momente vytvoriť usera s admin pravami s prednastavenym menom a heslom na testovacie ucely na lokalnom deployi | priority | ziskat default database dump pre lokálny server pointa je aby si každý mohol v ľubovolnom momente vytvoriť usera s admin pravami s prednastavenym menom a heslom na testovacie ucely na lokalnom deployi | 1 |
140,883 | 5,425,624,161 | IssuesEvent | 2017-03-03 07:05:07 | NostraliaWoW/mangoszero | https://api.github.com/repos/NostraliaWoW/mangoszero | opened | Honor Kill | Priority - Medium System | If I recall correctly, in vanilla, a character level 51+ will grant an honor kill for a level 60. Just testing some stuff today, noticed that 2 kills on a level 53 rewarded no honor kill (both of which he dealt damage) and 1 kill on a level 60 rewarded no honor kill. Out of 6 kills today I have been awarded only 3, not sure why? I understand the honor is calculated later in the day, but 3 of the kills (and all kills should) showed up instantly, and there is no evidence of the other 3. Would appreciate a look into this matter as currently it is making it hard to grind honor points.
Paprika says,
I can confirm this, as I was the level 53 getting killed. 3 times, not 2 =(
| 1.0 | Honor Kill - If I recall correctly, in vanilla, a character level 51+ will grant an honor kill for a level 60. Just testing some stuff today, noticed that 2 kills on a level 53 rewarded no honor kill (both of which he dealt damage) and 1 kill on a level 60 rewarded no honor kill. Out of 6 kills today I have been awarded only 3, not sure why? I understand the honor is calculated later in the day, but 3 of the kills (and all kills should) showed up instantly, and there is no evidence of the other 3. Would appreciate a look into this matter as currently it is making it hard to grind honor points.
Paprika says,
I can confirm this, as I was the level 53 getting killed. 3 times, not 2 =(
| priority | honor kill if i recall correctly in vanilla a character level will grant an honor kill for a level just testing some stuff today noticed that kills on a level rewarded no honor kill both of which he dealt damage and kill on a level rewarded no honor kill out of kills today i have been awarded only not sure why i understand the honor is calculated later in the day but of the kills and all kills should showed up instantly and there is no evidence of the other would appreciate a look into this matter as currently it is making it hard to grind honor points paprika says i can confirm this as i was the level getting killed times not | 1 |
670,430 | 22,689,719,240 | IssuesEvent | 2022-07-04 18:14:02 | MrAnyx/Notice | https://api.github.com/repos/MrAnyx/Notice | opened | Update the component scss file import location | Priority: Medium Type: Feature For: Website | ## Suggested solution
Currently, the scss files are imported directly in the js global file. We need to import it in each component.ts file
## Template
## Linked issue
## Todo
- [ ] Update scss file import location for lit components
- [ ] Update typescript files
- [ ] Update `encore_entry_link_tags` function in each twig files | 1.0 | Update the component scss file import location - ## Suggested solution
Currently, the scss files are imported directly in the js global file. We need to import it in each component.ts file
## Template
## Linked issue
## Todo
- [ ] Update scss file import location for lit components
- [ ] Update typescript files
- [ ] Update `encore_entry_link_tags` function in each twig files | priority | update the component scss file import location suggested solution currently the scss files are imported directly in the js global file we need to import it in each component ts file template linked issue todo update scss file import location for lit components update typescript files update encore entry link tags function in each twig files | 1 |
120,140 | 4,782,079,359 | IssuesEvent | 2016-10-28 11:56:34 | CS2103AUG2016-W11-C1/main | https://api.github.com/repos/CS2103AUG2016-W11-C1/main | closed | Make index version of commands | priority.medium type.enhancement | Commands that require this:
1. Delete
2. Done
3. View
4. Remind
5. Edit | 1.0 | Make index version of commands - Commands that require this:
1. Delete
2. Done
3. View
4. Remind
5. Edit | priority | make index version of commands commands that require this delete done view remind edit | 1 |
632,897 | 20,238,244,875 | IssuesEvent | 2022-02-14 06:06:33 | glific/glific-frontend | https://api.github.com/repos/glific/glific-frontend | closed | Update registration form UI | Priority : Medium | **Describe the task**
Move the optin help details at the top since people are not reading the message at the bottom
**References**
<img width="363" alt="image (3)" src="https://user-images.githubusercontent.com/32592458/153179537-4673a959-acaf-4912-86ae-503baab74fdc.png">
| 1.0 | Update registration form UI - **Describe the task**
Move the optin help details at the top since people are not reading the message at the bottom
**References**
<img width="363" alt="image (3)" src="https://user-images.githubusercontent.com/32592458/153179537-4673a959-acaf-4912-86ae-503baab74fdc.png">
| priority | update registration form ui describe the task move the optin help details at the top since people are not reading the message at the bottom references img width alt image src | 1 |
580,348 | 17,241,699,161 | IssuesEvent | 2021-07-21 00:06:08 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | Decals appear faded in-game | 2D graphics :paintbrush: bug :beetle: priority medium :grey_exclamation: | <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is:**
67d3879ce40bfdfacd9bb3b06e2f8675c8c688e9
**What expansions do you have installed?**
All
**Are you using any submods/mods? If so, which?**
[Fullscreen Barbershop](https://steamcommunity.com/sharedfiles/filedetails/?id=2220326926)
**Please explain your issue in as much detail as possible:**
Custom decals appear fine in debug portrait editor, but look faded in-game.
**Steps to reproduce the issue:**
N/A
**Upload an attachment below: .zip of your save, or screenshots:**
<details><summary>Click to expand</summary>
Decal as seen in debug editor (not faded):

Decal as seen in-game (faded):

</details> | 1.0 | Decals appear faded in-game - <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is:**
67d3879ce40bfdfacd9bb3b06e2f8675c8c688e9
**What expansions do you have installed?**
All
**Are you using any submods/mods? If so, which?**
[Fullscreen Barbershop](https://steamcommunity.com/sharedfiles/filedetails/?id=2220326926)
**Please explain your issue in as much detail as possible:**
Custom decals appear fine in debug portrait editor, but look faded in-game.
**Steps to reproduce the issue:**
N/A
**Upload an attachment below: .zip of your save, or screenshots:**
<details><summary>Click to expand</summary>
Decal as seen in debug editor (not faded):

Decal as seen in-game (faded):

</details> | priority | decals appear faded in game do not remove pre existing lines your mod version is what expansions do you have installed all are you using any submods mods if so which please explain your issue in as much detail as possible custom decals appear fine in debug portrait editor but look faded in game steps to reproduce the issue n a upload an attachment below zip of your save or screenshots click to expand decal as seen in debug editor not faded decal as seen in game faded | 1 |
24,749 | 2,672,615,386 | IssuesEvent | 2015-03-24 15:06:22 | cs2103jan2015-w11-3c/main | https://api.github.com/repos/cs2103jan2015-w11-3c/main | closed | As a user I can add multiple tags to a certain task | priority.medium type.epic type.story | ...so that manage tasks belong to several categories. | 1.0 | As a user I can add multiple tags to a certain task - ...so that manage tasks belong to several categories. | priority | as a user i can add multiple tags to a certain task so that manage tasks belong to several categories | 1 |
506,292 | 14,661,637,197 | IssuesEvent | 2020-12-29 04:30:03 | vanjarosoftware/Vanjaro.Platform | https://api.github.com/repos/vanjarosoftware/Vanjaro.Platform | closed | Custom font not saved | Bug Priority: Medium Release: Patch | I added Lato as a font.
Font family: 'Lato', sans-serif;
Font CSS:
@import url('https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700;900&display=swap');
Then I go to designer - Themes - basic - - menu - navigation and I change the font to Lato.
After saving, I get:
Theme
Error: Invalid CSS after "...o', sans-serif;": expected "}", was "!important;" on line 629:34 of C:/Inetpub/vhosts/schutte.nl/httpdocs/Portals/_default/vThemes/Basic/scss/Bootstrap/ >> font-family: 'Lato', sans-serif;!important; ---------------------------------^
| 1.0 | Custom font not saved - I added Lato as a font.
Font family: 'Lato', sans-serif;
Font CSS:
@import url('https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700;900&display=swap');
Then I go to designer - Themes - basic - - menu - navigation and I change the font to Lato.
After saving, I get:
Theme
Error: Invalid CSS after "...o', sans-serif;": expected "}", was "!important;" on line 629:34 of C:/Inetpub/vhosts/schutte.nl/httpdocs/Portals/_default/vThemes/Basic/scss/Bootstrap/ >> font-family: 'Lato', sans-serif;!important; ---------------------------------^
| priority | custom font not saved i added lato as a font font family lato sans serif font css import url then i go to designer themes basic menu navigation and i change the font to lato after saving i get theme error invalid css after o sans serif expected was important on line of c inetpub vhosts schutte nl httpdocs portals default vthemes basic scss bootstrap font family lato sans serif important | 1 |
502,197 | 14,542,099,244 | IssuesEvent | 2020-12-15 15:19:38 | konveyor/forklift-ui | https://api.github.com/repos/konveyor/forklift-ui | opened | Replace "Migration Toolkit for Virtualization" branding with "Forklift" when BRAND_TYPE=Konveyor | medium-priority | In an email thread from @jameslabocki we determined that the MTV branding should only be used downstream, and we should be referring to the app as Forklift in the UI (masthead, welcome page). | 1.0 | Replace "Migration Toolkit for Virtualization" branding with "Forklift" when BRAND_TYPE=Konveyor - In an email thread from @jameslabocki we determined that the MTV branding should only be used downstream, and we should be referring to the app as Forklift in the UI (masthead, welcome page). | priority | replace migration toolkit for virtualization branding with forklift when brand type konveyor in an email thread from jameslabocki we determined that the mtv branding should only be used downstream and we should be referring to the app as forklift in the ui masthead welcome page | 1 |
452,002 | 13,044,687,754 | IssuesEvent | 2020-07-29 05:28:00 | WaifuHarem/waifu-server | https://api.github.com/repos/WaifuHarem/waifu-server | opened | [src/msdcalc.cpp] Move to seperate repository | Medium Priority | Minacalc and its javascript bindings needs to be moved to a seperate project to be imported at runtime as a module. | 1.0 | [src/msdcalc.cpp] Move to seperate repository - Minacalc and its javascript bindings needs to be moved to a seperate project to be imported at runtime as a module. | priority | move to seperate repository minacalc and its javascript bindings needs to be moved to a seperate project to be imported at runtime as a module | 1 |
791,727 | 27,873,856,567 | IssuesEvent | 2023-03-21 14:58:33 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Backup/restore fails on colocated database due to schema packing issue | kind/bug area/docdb priority/medium | Jira Link: [DB-5376](https://yugabyte.atlassian.net/browse/DB-5376)
### Description
Create one rf=1 cluster without specifying any packed_row flag.
```
./bin/yb-ctl destroy && ./bin/yb-ctl create && ./bin/yb-ctl start && ./bin/ysqlsh
```
Repro
```
create database qqq with colocation = true;
\c qqq
create table tbl (k int, v int);
create index on tbl(k);
drop table tbl;
create table tbl (k int, v int);
create table tbl3 (k int, v int, v2 text);
create unique index idx2 on tbl3(v2);
create index idx on tbl3(v);
alter table tbl3 add primary key (k);
backup && restore:
./managed/devops/bin/yb_backup.py --masters 127.0.0.1 --remote_yb_admin_binary ./build/latest/bin/yb-admin --remote_ysql_dump_binary ./build/latest/postgres/bin/ysql_dump --remote_ysql_shell_binary ./build/latest/postgres/bin/ysqlsh --no_ssh --storage_type nfs --nfs_storage_path ~/yugabyte-data --backup_location ~/bug_repro/ --no_auto_name --keyspace ysql.qqq --verbose create
recreate a fresh cluster
./managed/devops/bin/yb_backup.py --masters 127.0.0.1 --remote_yb_admin_binary ./build/latest/bin/yb-admin --remote_ysql_dump_binary ./build/latest/postgres/bin/ysql_dump --remote_ysql_shell_binary ./build/latest/postgres/bin/ysqlsh --no_ssh --storage_type nfs --nfs_storage_path ~/yugabyte-data --backup_location ~/bug_repro/ --no_auto_name --keyspace ysql.qqq --verbose restore
```
The restore process cannot finish.
```
2023-02-03 10:24:27,995 INFO: Waiting for snapshot restoring to complete...
```
And tserver has fatal error:
```
F0203 10:23:56.669042 12390 tablet_metadata.cc:241] TBL 00004000000030008000000000004006 T 498513c0ad4b4efdb9ddbbcf5e389644 P ca6f5d18a3484ff3be9f41356b5065fc: After merging schema packings during restore, latest schema does not have the same packing as the corresponding latest packing for table 00004000000030008000000000004006
```.
[DB-5376]: https://yugabyte.atlassian.net/browse/DB-5376?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [DocDB] Backup/restore fails on colocated database due to schema packing issue - Jira Link: [DB-5376](https://yugabyte.atlassian.net/browse/DB-5376)
### Description
Create one rf=1 cluster without specifying any packed_row flag.
```
./bin/yb-ctl destroy && ./bin/yb-ctl create && ./bin/yb-ctl start && ./bin/ysqlsh
```
Repro
```
create database qqq with colocation = true;
\c qqq
create table tbl (k int, v int);
create index on tbl(k);
drop table tbl;
create table tbl (k int, v int);
create table tbl3 (k int, v int, v2 text);
create unique index idx2 on tbl3(v2);
create index idx on tbl3(v);
alter table tbl3 add primary key (k);
backup && restore:
./managed/devops/bin/yb_backup.py --masters 127.0.0.1 --remote_yb_admin_binary ./build/latest/bin/yb-admin --remote_ysql_dump_binary ./build/latest/postgres/bin/ysql_dump --remote_ysql_shell_binary ./build/latest/postgres/bin/ysqlsh --no_ssh --storage_type nfs --nfs_storage_path ~/yugabyte-data --backup_location ~/bug_repro/ --no_auto_name --keyspace ysql.qqq --verbose create
recreate a fresh cluster
./managed/devops/bin/yb_backup.py --masters 127.0.0.1 --remote_yb_admin_binary ./build/latest/bin/yb-admin --remote_ysql_dump_binary ./build/latest/postgres/bin/ysql_dump --remote_ysql_shell_binary ./build/latest/postgres/bin/ysqlsh --no_ssh --storage_type nfs --nfs_storage_path ~/yugabyte-data --backup_location ~/bug_repro/ --no_auto_name --keyspace ysql.qqq --verbose restore
```
The restore process cannot finish.
```
2023-02-03 10:24:27,995 INFO: Waiting for snapshot restoring to complete...
```
And tserver has fatal error:
```
F0203 10:23:56.669042 12390 tablet_metadata.cc:241] TBL 00004000000030008000000000004006 T 498513c0ad4b4efdb9ddbbcf5e389644 P ca6f5d18a3484ff3be9f41356b5065fc: After merging schema packings during restore, latest schema does not have the same packing as the corresponding latest packing for table 00004000000030008000000000004006
```.
[DB-5376]: https://yugabyte.atlassian.net/browse/DB-5376?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | backup restore fails on colocated database due to schema packing issue jira link description create one rf cluster without specifying any packed row flag bin yb ctl destroy bin yb ctl create bin yb ctl start bin ysqlsh repro create database qqq with colocation true c qqq create table tbl k int v int create index on tbl k drop table tbl create table tbl k int v int create table k int v int text create unique index on create index idx on v alter table add primary key k backup restore managed devops bin yb backup py masters remote yb admin binary build latest bin yb admin remote ysql dump binary build latest postgres bin ysql dump remote ysql shell binary build latest postgres bin ysqlsh no ssh storage type nfs nfs storage path yugabyte data backup location bug repro no auto name keyspace ysql qqq verbose create recreate a fresh cluster managed devops bin yb backup py masters remote yb admin binary build latest bin yb admin remote ysql dump binary build latest postgres bin ysql dump remote ysql shell binary build latest postgres bin ysqlsh no ssh storage type nfs nfs storage path yugabyte data backup location bug repro no auto name keyspace ysql qqq verbose restore the restore process cannot finish info waiting for snapshot restoring to complete and tserver has fatal error tablet metadata cc tbl t p after merging schema packings during restore latest schema does not have the same packing as the corresponding latest packing for table | 1 |
87,352 | 3,750,251,284 | IssuesEvent | 2016-03-11 05:27:06 | JPaulMora/Pyrit | https://api.github.com/repos/JPaulMora/Pyrit | reopened | pyrit with cal++ on a switchable graphics intel HD4000/AMD radeon HD 8730M | bug duplicate help wanted Priority-Medium | ```
What steps will reproduce the problem?
1.pyrit list_core
2.pyrit benchmark
What is the expected output? What do you see instead?
the expected output is to list the cores with the gpu core and benchmarks
instead i am getting a cal error:
error: pyrit terminate called after throwing an instance of 'cal::Error'
what(): Operational error
What version of the product are you using? On what operating system?
pyrit dev 0.4.1 with kali linux 1.0.7
Please provide any additional information below.
my laptop have a switchable graphics architecture intel HD4000/ AMD radeon HD
8730M
root@kali:~# lspci -vnn | grep VGA
00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core
processor Graphics Controller [8086:0166] (rev 09) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI
Mars [Radeon HD 8500/8700M Series] [1002:6601] (prog-if 00 [VGA controller])
i have kali linux 1.0.7 and trying to use pyrit with my gpu (CAL or opencl)
SDK 3.7
calapp 0.90
pyrit 0.4.1
i was able to install the amd drivers and catalyst with no errors:
root@kali:~# fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon (TM) HD 8500M/8700M
OpenGL version string: 4.4.12874 Compatibility Profile Context 14.10.1006.1001
root@kali:~# lsmod | grep fglrx
fglrx 8675016 77
button 12944 2 i915,fglrx
ATI catalyst center using my AMD graphics.
i installed AMD sdk as well as well as cal++ and pyrit with no errors at all.
root@kali:~# pyrit list_cores
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
The following cores seem available...
#1: 'CPU-Core (SSE2/AES)'
#2: 'CPU-Core (SSE2/AES)'
#3: 'CPU-Core (SSE2/AES)'
#4: 'CPU-Core (SSE2/AES)'
i installed pyrit CAL and when i tried to any pyrit command (pyrit list_cores,
pyrit benchmark), i get the following error: pyrit terminate called after
throwing an instance of 'cal::Error' what(): Operational error
i made sure cal was installed with no errors but couldnt get pyrit to run with
it so i decided to switch to pyrit opencl
after removing pyrit and cleaning and reinstalling pyrit and pyrit opencl, i
run the command list_cores but it doesnt recognize my gpu so the output is
still
root@kali:~/pyrit_svn/cpyrit_opencl# pyrit list_cores
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
The following cores seem available...
#1: 'CPU-Core (SSE2/AES)'
#2: 'CPU-Core (SSE2/AES)'
#3: 'CPU-Core (SSE2/AES)'
#4: 'CPU-Core (SSE2/AES)'
Can anybody help me with this task to get pyrit to use my gpu using cal or
opencl?
i am sure that no errors occured at any stage of the installation, and i tried
to have a clean install of kali with same results.
is my switchable graphics architecture related to that?
kindly help me in this.
```
Original issue reported on code.google.com by `ectan...@gmail.com` on 29 Jun 2014 at 11:31 | 1.0 | pyrit with cal++ on a switchable graphics intel HD4000/AMD radeon HD 8730M - ```
What steps will reproduce the problem?
1.pyrit list_core
2.pyrit benchmark
What is the expected output? What do you see instead?
the expected output is to list the cores with the gpu core and benchmarks
instead i am getting a cal error:
error: pyrit terminate called after throwing an instance of 'cal::Error'
what(): Operational error
What version of the product are you using? On what operating system?
pyrit dev 0.4.1 with kali linux 1.0.7
Please provide any additional information below.
my laptop have a switchable graphics architecture intel HD4000/ AMD radeon HD
8730M
root@kali:~# lspci -vnn | grep VGA
00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core
processor Graphics Controller [8086:0166] (rev 09) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI
Mars [Radeon HD 8500/8700M Series] [1002:6601] (prog-if 00 [VGA controller])
i have kali linux 1.0.7 and trying to use pyrit with my gpu (CAL or opencl)
SDK 3.7
calapp 0.90
pyrit 0.4.1
i was able to install the amd drivers and catalyst with no errors:
root@kali:~# fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon (TM) HD 8500M/8700M
OpenGL version string: 4.4.12874 Compatibility Profile Context 14.10.1006.1001
root@kali:~# lsmod | grep fglrx
fglrx 8675016 77
button 12944 2 i915,fglrx
ATI catalyst center using my AMD graphics.
i installed AMD sdk as well as well as cal++ and pyrit with no errors at all.
root@kali:~# pyrit list_cores
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
The following cores seem available...
#1: 'CPU-Core (SSE2/AES)'
#2: 'CPU-Core (SSE2/AES)'
#3: 'CPU-Core (SSE2/AES)'
#4: 'CPU-Core (SSE2/AES)'
i installed pyrit CAL and when i tried to any pyrit command (pyrit list_cores,
pyrit benchmark), i get the following error: pyrit terminate called after
throwing an instance of 'cal::Error' what(): Operational error
i made sure cal was installed with no errors but couldnt get pyrit to run with
it so i decided to switch to pyrit opencl
after removing pyrit and cleaning and reinstalling pyrit and pyrit opencl, i
run the command list_cores but it doesnt recognize my gpu so the output is
still
root@kali:~/pyrit_svn/cpyrit_opencl# pyrit list_cores
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
The following cores seem available...
#1: 'CPU-Core (SSE2/AES)'
#2: 'CPU-Core (SSE2/AES)'
#3: 'CPU-Core (SSE2/AES)'
#4: 'CPU-Core (SSE2/AES)'
Can anybody help me with this task to get pyrit to use my gpu using cal or
opencl?
i am sure that no errors occured at any stage of the installation, and i tried
to have a clean install of kali with same results.
is my switchable graphics architecture related to that?
kindly help me in this.
```
Original issue reported on code.google.com by `ectan...@gmail.com` on 29 Jun 2014 at 11:31 | priority | 1 |
329,457 | 10,020,053,593 | IssuesEvent | 2019-07-16 11:40:36 | garden-io/garden | https://api.github.com/repos/garden-io/garden | closed | Garden syncs ignored files/dirs to .garden/build dir | bug priority:medium | ## Bug
### Current Behavior
Garden syncs everything to the `.garden/build` dir, even if some of the content is ignored via the `include` directive or in a `.gardenignore` file.
### Expected behavior
It should exclude content that's ignored via `.gardenignore` or missing from the `include` list if provided.
### Reproducible example
1. Pick an example project
2. Run `rm -rf .garden/build` from the root.
3. Exclude files/dirs by adding a `.gardenignore` to a module or by using the `include` directive.
4. Run a command that stages a build.
5. Notice that nothing got excluded.
### Workaround
N/A. | 1.0 | priority | 1 |
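The expected filtering behavior in the Garden issue above can be sketched with a hypothetical helper (Garden itself is TypeScript; `filter_for_sync` and the sample patterns are illustrative, not Garden's API):

```python
from fnmatch import fnmatch

def filter_for_sync(paths, ignore_patterns, include=None):
    """Return only the paths that should be staged to .garden/build."""
    kept = []
    for path in paths:
        # Anything matching a .gardenignore pattern is excluded.
        if any(fnmatch(path, pat) for pat in ignore_patterns):
            continue
        # If an `include` list is given, only matching paths survive.
        if include is not None and not any(fnmatch(path, pat) for pat in include):
            continue
        kept.append(path)
    return kept

print(filter_for_sync(
    ["src/app.ts", "node_modules/x/y.js", "README.md"],
    ["node_modules/*"],
    ["src/*", "*.md"],
))  # ['src/app.ts', 'README.md']
```

The bug report amounts to the build-staging step copying all three paths instead of the filtered two.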
40,614 | 2,868,931,805 | IssuesEvent | 2015-06-05 22:02:13 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Pub package manager does not handle native extensions | bug duplicate Priority-Medium | **Issue by [whesse](https://github.com/whesse)**
_Originally opened as dart-lang/sdk#6290_
----
I don't think that pub will correctly manage and download the native extension shared libraries for a Dart extension. These are typically dart libraries with an import statement like
#import("dart-ext:foo"),
which makes the standalone Dart binary load a library named foo.dll, libfoo.so, or libfoo.dylib, and run the initialization routine. To distribute these packages, pub would have to download the appropriate native library to the package cache.
This is also blocked on proper handling of the package: prefix in the URL of the importing library, when figuring out the path of the native shared library from the (relative) path in the dart-ext: import. This is issue dart-lang/sdk#6264. | 1.0 | priority | 1 |
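The platform-specific library naming described in the pub issue above (foo.dll, libfoo.so, libfoo.dylib) can be sketched as follows; `native_lib_filename` is a hypothetical helper, not the Dart VM's actual lookup code:

```python
def native_lib_filename(ext_name, platform):
    """Map the name in #import("dart-ext:foo") to the shared-library
    file the standalone Dart binary would try to load."""
    if platform == "windows":
        return f"{ext_name}.dll"
    if platform == "macos":
        return f"lib{ext_name}.dylib"
    return f"lib{ext_name}.so"  # linux and other unixes

print(native_lib_filename("foo", "linux"))    # libfoo.so
print(native_lib_filename("foo", "windows"))  # foo.dll
print(native_lib_filename("foo", "macos"))    # libfoo.dylib
```

To distribute such a package, pub would have to pick the right one of these files for the user's platform and place it in the package cache.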
305,981 | 9,379,199,438 | IssuesEvent | 2019-04-04 14:28:25 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | Importing CSV with bypass_vlan skips all the nodes unconditionally | Priority: Medium Type: Bug | Since 7.3, when you import a CSV with a bypass_vlan, it will skip the nodes since it needs the VLAN to be in the allowed roles for the user.
Since a VLAN is not a role, this results in all the nodes being skipped
Perhaps it was meant to be filtering on bypass_role and not bypass_vlan?
Problematic code was introduced here:
https://github.com/inverse-inc/packetfence/blame/packetfence-7.3.0/html/pfappserver/lib/pfappserver/Model/Node.pm#L401
Now that code is in import.pm and still has an impact on the command line import and the old+new admin
We should determine whether this should be bypass_role instead of bypass_vlan or just be removed completely since it doesn't seem to make sense. | 1.0 | priority | 1 |
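The intended check in the packetfence issue above, filtering on `bypass_role` rather than `bypass_vlan`, can be sketched in Python (the real code is Perl in import.pm; `should_skip_node` is illustrative):

```python
def should_skip_node(node, allowed_roles):
    """Skip an imported node only when its bypass_role is set and not
    allowed for the user. Filtering on bypass_vlan (a VLAN, not a role)
    would skip every node, which is the reported bug."""
    role = node.get("bypass_role")
    return role is not None and role not in allowed_roles

print(should_skip_node({"bypass_role": "guest"}, {"staff"}))  # True
print(should_skip_node({"bypass_vlan": "10"}, {"staff"}))     # False
```

With this check, a row that carries only a `bypass_vlan` is imported normally instead of being rejected.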
178,513 | 6,609,660,434 | IssuesEvent | 2017-09-19 15:10:54 | apache/incubator-openwhisk-wskdeploy | https://api.github.com/repos/apache/incubator-openwhisk-wskdeploy | opened | Env. Var concatenation (with dollar sign notation) has errors that are hidden | bug priority: medium | It appears that concatenation only works for the simplest cases.
If an error occurs while parsing a string value that uses $ (dollar) notation (many examples below), the error is swallowed and the result is
1) No error reported
2) No warning reported
3) empty string ("") is returned.
Here are the examples I tested that resulted (incorrectly) in an empty string:
param_simple_env_var_2: $(GOPATH)
param_simple_env_var_3: $()
param_simple_env_var_concat_1: $(GOPATH)/test
param_simple_env_var_concat_2: $(GOPATH)/test$GOPATH
| 1.0 | priority | 1 |
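A sketch of the behavior the wskdeploy issue above asks for: `$(NAME)` expansion that reports undefined or empty references instead of silently returning an empty string (`interpolate` is a hypothetical Python stand-in for the Go implementation):

```python
import re

def interpolate(value, env):
    """Expand $(NAME) references from `env`, failing loudly on $() and
    on undefined names rather than hiding the error."""
    def repl(match):
        name = match.group(1)
        if not name:
            raise ValueError("empty $() reference")
        if name not in env:
            raise KeyError(f"undefined variable {name!r}")
        return env[name]
    return re.sub(r"\$\(([^)]*)\)", repl, value)

env = {"GOPATH": "/home/me/go"}
print(interpolate("$(GOPATH)/test", env))  # /home/me/go/test
```

Note that a bare `$GOPATH` without parentheses is left untouched, matching the `param_simple_env_var_concat_2` example above.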
52,330 | 3,022,642,008 | IssuesEvent | 2015-07-31 21:39:13 | information-artifact-ontology/IAO | https://api.github.com/repos/information-artifact-ontology/IAO | opened | AnnotationProperty -- alt_id | imported Priority-Medium Type-Term | _From [shahid.m...@gmail.com](https://code.google.com/u/110958216731968693037/) on February 01, 2011 12:07:24_
Please indicate the label for the new term alt_id Please provide a textual definition when a class X is merged into a class Y, the original identifier for X is retained as an alt_id. Please add an example of usage for that term Please provide any additional information below. (e.g., proposed position in the IAO hierarchy) subannotationpropertyof: rdfs:seeAlso
_Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=100_ | 1.0 | priority | 1 |
588,950 | 17,686,379,786 | IssuesEvent | 2021-08-24 02:33:11 | staynomad/Nomad-Front | https://api.github.com/repos/staynomad/Nomad-Front | closed | Featured listings container duplicating listings | dev:bug difficulty:easy priority:medium | # Background
<!--- Put any relevant background information here. --->

- Was only replicated on dev environment, not staging
# Task
<!--- Put the task here (ideally bullet points). --->
- Refactor featured listings container to display maximum listings possible when there are less than the required number of listings in the popularities collection
# Done When
<!--- Put the completion criteria for the issue here. --->
- Listings are not duplicated in the featured listing container | 1.0 | priority | 1 |
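The fix requested in the Nomad issue above can be sketched as dedup-then-top-up selection (hypothetical helper, not the project's actual code):

```python
def featured(popular_ids, fallback_ids, limit):
    """Pick up to `limit` listing ids without duplicates, topping up
    from a fallback pool when the popularities collection is short."""
    seen, out = set(), []
    for lid in list(popular_ids) + list(fallback_ids):
        if lid not in seen:
            seen.add(lid)
            out.append(lid)
        if len(out) == limit:
            break
    return out

print(featured(["a", "b", "a"], ["c", "d"], 4))  # ['a', 'b', 'c', 'd']
```

The `seen` set is what prevents the duplicated cards shown in the screenshot.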
511,542 | 14,876,095,693 | IssuesEvent | 2021-01-20 00:07:44 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.2.0 beta staging-1906]Voter no longer eligible for election process canceled but passed election | Category: Laws Priority: Medium | Mishka ran for office, voting permission was set to only if residency holder inside and I was a resident, so I cast my vote and it registered. I then moved residency from the area and I was no longer eligible to vote, which updated the election to automatically fail, BUT my vote was already cast so it was still counted.
If an election fails it should fail.
As per this image this election realized it should fail and states that it did, but it in fact did not and assigned mishka to the elected title.

| 1.0 | priority | 1 |
800,649 | 28,373,613,183 | IssuesEvent | 2023-04-12 18:57:40 | DDMAL/CantusDB | https://api.github.com/repos/DDMAL/CantusDB | opened | On Edit Syllabification page, auto-populate Syllabized Full Text field | priority: medium simple fix | As noted by @annamorphism on #606, On the Edit Syllabification page, where there is no existing Syllabized Full Text, we should auto-populate this field on the page to give editors a starting point in creating/modifying the Syllabized Full Text. We can do this by syllabizing the MS Spelling Full Text field (or the STD Spelling field, or the Incipit, depending on what's available) using `syllabize_text`.
OldCantus already has this feature, so we should have it too. | 1.0 | priority | 1 |
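The fallback order described in the CantusDB issue above (MS Spelling, then STD Spelling, then Incipit) is a simple coalesce; the sketch below only picks the source text and leaves the actual syllabizing to Cantus's own `syllabize_text`:

```python
def syllabification_source(ms_text, std_text, incipit):
    """Choose the text to pre-populate the Syllabized Full Text field
    from, in the priority order described in the issue."""
    for candidate in (ms_text, std_text, incipit):
        if candidate:
            return candidate
    return ""

print(syllabification_source("", "Alleluia dies", "Alleluia"))  # Alleluia dies
```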
252,712 | 8,039,350,092 | IssuesEvent | 2018-07-30 18:05:53 | systers/communities | https://api.github.com/repos/systers/communities | closed | Development of Header and Footer | Category: Coding Difficulty: MEDIUM Priority: HIGH Program: GSoC | ## Description
As a user,
I need to develop the header and footer components
## Acceptance Criteria
### Update [Required]
- [ ] Nav bar components
- [ ] Footer components
## Definition of Done
- [ ] All of the required items are completed.
- [ ] Approval by 1 mentor.
## Estimation
5 hours
@Tharangi Can you assign me this issue ? | 1.0 | priority | 1 |
222,421 | 7,432,063,360 | IssuesEvent | 2018-03-25 20:49:11 | Cloud-CV/EvalAI | https://api.github.com/repos/Cloud-CV/EvalAI | closed | Permission denied: '/tmp/logfile' | GSOC backend bug medium-difficulty priority-high | The submission worker currently faces the problem of permission denied due to the dependency on `/tmp/logfile`. Here is the error log:
```
(EvalAI) 137 ubuntu@staging-evalai:~/Projects/EvalAI⟫ python scripts/workers/submission_worker.py settings.prod
Traceback (most recent call last):
File "scripts/workers/submission_worker.py", line 44, in <module>
django.setup()
File "/home/ubuntu/.virtualenvs/EvalAI/local/lib/python2.7/site-packages/django/__init__.py", line 22, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/home/ubuntu/.virtualenvs/EvalAI/local/lib/python2.7/site-packages/django/utils/log.py", line 75, in configure_logging
logging_config_func(logging_settings)
File "/usr/lib/python2.7/logging/config.py", line 794, in dictConfig
dictConfigClass(config).configure()
File "/usr/lib/python2.7/logging/config.py", line 576, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'logfile': [Errno 13] Permission denied: '/tmp/logfile'
``` | 1.0 | priority | 1 |
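One hedged workaround for the EvalAI traceback above is to resolve the log destination at startup and fall back to a per-user temp file when `/tmp/logfile` is not writable (illustrative only; `logfile_path` is not EvalAI's actual settings code):

```python
import logging
import os
import tempfile

def logfile_path(preferred="/tmp/logfile"):
    """Pick a log destination the current user can actually write,
    falling back to a per-user temp file on Errno 13 and friends."""
    try:
        with open(preferred, "a"):
            return preferred
    except OSError:
        fd, path = tempfile.mkstemp(prefix="worker-", suffix=".log")
        os.close(fd)
        return path

# The resolved path would then feed Django's LOGGING 'logfile' handler.
handler = logging.FileHandler(logfile_path())
```

This avoids the `ValueError: Unable to configure handler 'logfile'` raised by `dictConfig` when the handler's file cannot be opened.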
748,825 | 26,139,397,179 | IssuesEvent | 2022-12-29 16:13:15 | FullFrontalFrog/OhSoHeroTracker | https://api.github.com/repos/FullFrontalFrog/OhSoHeroTracker | closed | Multigrabs: Group action and enemy HP | bug question gameplay fix ready medium priority | ## Oh So Hero! v. 0.18.200
### Summary
Toppling an enemy and then having a third character jump in plays the group animation, but doesn't drain Joe's health. Depending on whether it was a KO or a swoon, the toppled enemy will either be fully KO'd at the end (in the first scenario), or partially KO'd and can be sex'd again (in the second one). The second enemy in this case gets off scot free (save for their health bar being where they'd end up after the animation; i.e., Daku's leap). If swooned, their health bar remains (but slowly drains during the animation)
### Consequences
Might be a bit confusing gameplay-wise.
### Steps to recreate
1. Topple the enemy in any way near another compatible enemy
2. Hope that a third enemy jumps in
### Examples
T0053 at Dropbox
### Error Log
Was not provided
### Configuration and system information
Was not provided | 1.0 | priority | 1 |
245,690 | 7,889,560,140 | IssuesEvent | 2018-06-28 05:02:27 | hack4impact-uiuc/h4i-recruitment | https://api.github.com/repos/hack4impact-uiuc/h4i-recruitment | closed | logically Separate Endpoints in different files with router in express.js | Priority: Medium good first issue help wanted | use express.Router https://expressjs.com/en/guide/routing.html
candidates endpoints should be in one file, face mash should be in another. define those endpoints using express.Router | 1.0 | priority | 1 |
387,063 | 11,455,426,688 | IssuesEvent | 2020-02-06 19:04:42 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | opened | Properly handle psiphon test error condition | bug priority/medium | We got from a user report the following:
```
49.17% psiphon experiment running
• OnProgress: 1.000000 - psiphon experiment complete
50.00% psiphon experiment complete
• failure.measurement error=clientlib.StartTunnel#285: tunnel start produced error: clientlib.StartTunnel.func3#271: controller.Run exited unexpectedly
• status.end
```
It looks like this should be handled properly, because currently it's generating an empty measurement file.
See:
```
[engine] jsonapi: request body: {}
[engine] jsonapi: method: POST
[engine] jsonapi: URL: https://ams-ps2.ooni.nu:443/report/XXXXXX/close
[engine] http: connection to 37.218.245.109:443 ready; sending request
[engine] > POST /report/XXXXXX/close HTTP/1.1
``` | 1.0 | priority | 1 |
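The handling the ooni issue above asks for can be sketched as: annotate the measurement with the failure string and skip the upload instead of POSTing an empty report (probe-engine is Go; `finalize_measurement` and the `submit` flag are hypothetical):

```python
def finalize_measurement(measurement, error):
    """Record a failed experiment run in the measurement itself rather
    than emitting an empty measurement file."""
    if error is not None:
        measurement["test_keys"] = {"failure": str(error)}
        measurement["submit"] = False  # hypothetical flag: don't POST an empty report
    else:
        measurement["submit"] = True
    return measurement
```

Applied to the log above, the `controller.Run exited unexpectedly` string would land in `test_keys.failure` instead of producing an empty `{}` body for the close request.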
378,104 | 11,196,213,333 | IssuesEvent | 2020-01-03 09:27:22 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.0 staging-1322] Ecopedis: icons troubles | Medium Priority | Separated from the old issue #14013
[] Work with Rob to get all missing icons in ecopedia
[] Need different icon for rooms and housing here. Make sure no duplicate icons.
[] Duplicate icons, and contract board icon is bad and needs redoing. Use different icon for transport too, a truck or cart
[] Add icons for missing types in laws
| 1.0 | priority | 1 |
719,726 | 24,768,113,150 | IssuesEvent | 2022-10-22 19:56:12 | tupoy-ya/TupoyeMenu | https://api.github.com/repos/tupoy-ya/TupoyeMenu | opened | [Request]: Hud color selector | enhancement medium priority | ### Problem
Currently you can't select your hud colors, for example change the default `freemode` blue color in your hud to red.
### Solution
Add hud color selector or maybe just change colors inside a wren script when it's merged.
### Reason
Same as the problem above; why are there two fields for that?
### Additional context
Cpp
```cpp
HUD::REPLACE_HUD_COLOUR_WITH_RGBA(int hudColorIndex, int r, int g, int b, int a); // 0xF314CF4F0211894E
```
Wren
```wren
HUD.REPLACE_HUD_COLOUR_WITH_RGBA(hudColorIndex, r, g, b, a) // 0xF314CF4F0211894E
``` | 1.0 | priority | 1 |
30,074 | 2,722,154,606 | IssuesEvent | 2015-04-14 00:28:48 | CruxFramework/crux-smart-faces | https://api.github.com/repos/CruxFramework/crux-smart-faces | closed | Problem in the Filter component | bug imported Milestone-M14-C4 Module-CruxWidgets Priority-Medium TargetVersion-5.3.0 | _From [flavia.jesus@triggolabs.com](https://code.google.com/u/flavia.jesus@triggolabs.com/) on March 25, 2015 15:59:15_
When running through the options suggested by the Filter, there is no visual emphasis identifying the current position.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=657_ | 1.0 | priority | 1 |
176,373 | 6,558,743,910 | IssuesEvent | 2017-09-06 22:59:14 | qlicker/qlicker | https://api.github.com/repos/qlicker/qlicker | reopened | User can't verify email if domain enforced | bug Medium priority | If an admin adds a user from a different domain than those allowed (by temporarily lifting the restriction), then the verify email sent to that user does not work (although the user can still log in). It becomes impossible for that user to verify their email (the admin has to do it). | 1.0 | User can't verify email if domain enforced - If an admin adds a user from a different domain than those allowed (by temporarily lifting the restriction), then the verify email sent to that user does not work (although the user can still log in). It becomes impossible for that user to verify their email (the admin has to do it). | priority | user can t verify email if domain enforced if an admin adds a user from a different domain than those allowed by temporarily lifting the restriction then the verify email sent to that user does not work although the user can still log in it becomes impossible for that user to verify their email the admin has to do it | 1 |
486,667 | 14,012,457,172 | IssuesEvent | 2020-10-29 09:04:02 | AY2021S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2021S1-CS2103T-W16-3/tp | closed | Make Date optional for certain commands | priority.medium :2nd_place_medal: type.enhancement :+1: | If there is no date inputted for commands such as `AddExpenseCommand`, `AddIncomeCommand`, `ConvertBookmarkIncomeCommand` and `ConvertBookmarkExpenseCommand`, the current date will be used. | 1.0 | Make Date optional for certain commands - If there is no date inputted for commands such as `AddExpenseCommand`, `AddIncomeCommand`, `ConvertBookmarkIncomeCommand` and `ConvertBookmarkExpenseCommand`, the current date will be used. | priority | make date optional for certain commands if there is no date inputted for commands such as addexpensecommand addincomecommand convertbookmarkincomecommand and convertbookmarkexpensecommand the current date will be used | 1 |
69,903 | 3,316,300,148 | IssuesEvent | 2015-11-06 16:20:11 | TeselaGen/ve | https://api.github.com/repos/TeselaGen/ve | closed | Master Lists not relevant for downstream automation tab | Customer: DAS Phase I Milestone #4 - Oracle Rewrite Priority: Medium Status: Active Type: Bug | Master Lists interface not relevant for inclusion in downstream automation tab. Please remove.
(found during live demo at DAS!) | 1.0 | Master Lists not relevant for downstream automation tab - Master Lists interface not relevant for inclusion in downstream automation tab. Please remove.
(found during live demo at DAS!) | priority | master lists not relevant for downstream automation tab master lists interface not relevant for inclusion in downstream automation tab please remove found during live demo at das | 1 |
87,721 | 3,757,310,862 | IssuesEvent | 2016-03-13 22:15:12 | Heteroskedastic/Dr-referral-tracker | https://api.github.com/repos/Heteroskedastic/Dr-referral-tracker | closed | Refactor Physician to ReferringEntity | enhancement Medium Priority | ReferringEntity
- organization
- add title
- add timestamp
- physician_name to ReferringEntity_name (I don't care exactly what you call this, re_name is fine, smae with the rest of the fields)
- physician_phone to ReferringEntity_phone
- physician_email to ReferringEntity_email
- referral_special ? How do we deal with predefined Current Patient(not a referral) and Social(no name)
| 1.0 | Refactor Physician to ReferringEntity - ReferringEntity
- organization
- add title
- add timestamp
- physician_name to ReferringEntity_name (I don't care exactly what you call this, re_name is fine, same with the rest of the fields)
- physician_phone to ReferringEntity_phone
- physician_email to ReferringEntity_email
- referral_special ? How do we deal with predefined Current Patient(not a referral) and Social(no name)
| priority | refactor physician to referringentity referringentity organization add title add timestamp physician name to referringentity name i don t care exactly what you call this re name is fine smae with the rest of the fields physician phone to referringentity phone physician email to referringentity email referral special how do we deal with predefined current patient not a referral and social no name | 1 |
154,464 | 5,919,422,078 | IssuesEvent | 2017-05-22 17:40:30 | mkdo/kapow-setup | https://api.github.com/repos/mkdo/kapow-setup | opened | Refactor into a question based workflow | Priority: Medium Status: Pending | There are a number of optional things that we would ideally want to do upon installing a new Kapow! instance. As a result the script needs to be overhauled to take a more question based approach, similar to the way Gary Jones' plugin deployment script works.
Some discussion is needed around which specific options need to be offered to the user. | 1.0 | Refactor into a question based workflow - There are a number of optional things that we would ideally want to do upon installing a new Kapow! instance. As a result the script needs to be overhauled to take a more question based approach, similar to the way Gary Jones' plugin deployment script works.
Some discussion is needed around which specific options need to be offered to the user. | priority | refactor into a question based workflow there are a number of optional things that we would ideally want to do upon installing a new kapow instance as a result the script needs to be overhauled to take a more question based approach similar to the way gary jones plugin deployment script works some discussion is needed around which specific options need to be offered to the user | 1 |
537,256 | 15,726,168,650 | IssuesEvent | 2021-03-29 10:56:35 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | django model constraints inadequate | Medium Priority Resource Model bug | The current crisis is a symptom of a larger problem. **It should not be possible to enter nonsensical metadata into Django.** Addressing this is a matter of
1. Adding reasonable constraints to the metadata models so that this is not possible.
2. Where necessary, slightly redesigning the models so that constraints are easier to define. | 1.0 | django model constraints inadequate - The current crisis is a symptom of a larger problem. **It should not be possible to enter nonsensical metadata into Django.** Addressing this is a matter of
1. Adding reasonable constraints to the metadata models so that this is not possible.
2. Where necessary, slightly redesigning the models so that constraints are easier to define. | priority | django model constraints inadequate the current crisis is a symptom of a larger problem it should not be possible to enter nonsensical metadata into django addressing this is a matter of adding reasonable constraints to the metadata models so that this is not possible where necessary slightly redesigning the models so that constraints are easier to define | 1 |
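The first point above, constraints that make nonsensical metadata impossible to store, can be illustrated at the database layer with a CHECK constraint; here is a generic SQLite sketch (table and column names are hypothetical, not HydroShare's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A CHECK constraint rejects nonsensical rows at write time, so bad
# metadata can never reach the table in the first place.
conn.execute(
    """CREATE TABLE metadata_demo (
           title TEXT NOT NULL CHECK (length(trim(title)) > 0),
           year  INTEGER CHECK (year BETWEEN 1900 AND 2100)
       )"""
)
conn.execute("INSERT INTO metadata_demo VALUES ('Streamflow study', 2021)")

try:
    conn.execute("INSERT INTO metadata_demo VALUES ('   ', 2021)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # blank title violates the CHECK constraint
```

Django exposes the same idea at the model layer through `CheckConstraint` entries in a model's `Meta.constraints`, which is one way point 1 could be implemented.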
388,600 | 11,489,864,375 | IssuesEvent | 2020-02-11 16:09:38 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Design QA: Trading Page - Order Book MOBILE | Needed for V2 launch Priority: Medium | Design: https://www.figma.com/file/fLWVwmanAwetVZbujQquEi/Market-Page?node-id=184%3A3613
- [x] Reference desktop ticket for majority of changes that should get pushed through to mobile 4703
- [x] Increase row height to 24px. Lets test this but may need to go even bigger to easily select a row
| 1.0 | Design QA: Trading Page - Order Book MOBILE - Design: https://www.figma.com/file/fLWVwmanAwetVZbujQquEi/Market-Page?node-id=184%3A3613
- [x] Reference desktop ticket for majority of changes that should get pushed through to mobile 4703
- [x] Increase row height to 24px. Lets test this but may need to go even bigger to easily select a row
| priority | design qa trading page order book mobile design reference desktop ticket for majority of changes that should get pushed through to mobile increase row height to lets test this but may need to go even bigger to easily select a row | 1 |
718,245 | 24,708,939,439 | IssuesEvent | 2022-10-19 21:51:41 | bounswe/bounswe2022group6 | https://api.github.com/repos/bounswe/bounswe2022group6 | closed | Initializing Backend Framework | Priority: Medium State: In Progress Type: Development | Framework for Backend application should be initialized. At [Meeting #3](https://github.com/bounswe/bounswe2022group6/wiki/Meeting-%233-18.10.2022), it is decided that we will use Django v4.1. | 1.0 | Initializing Backend Framework - Framework for Backend application should be initialized. At [Meeting #3](https://github.com/bounswe/bounswe2022group6/wiki/Meeting-%233-18.10.2022), it is decided that we will use Django v4.1. | priority | initializing backend framework framework for backend application should be initialized at it is decided that we will use django | 1 |
224,055 | 7,466,078,685 | IssuesEvent | 2018-04-02 08:44:55 | olpeh/wht | https://api.github.com/repos/olpeh/wht | opened | UI Bug(s) reported by a user | bug medium-priority | Bug report from a user:
In the current version, calling an entry immediately changes the start date and start time, leading to a wrong duration value. If no changes are made, the cancel button reverts everything, but it is very hard if corrections are intended.
Also, when creating new entries for the past, i.e. last week, the start time is set to the current time even if this option is unchecked. So I have to modify 4 fields! At least 2 fields should adapt automagically according to the default duration time.
However, the comment field keeps its value across all projects, but comments normally belong to a specific entry, and I think it would be better to reset this field for each instance.
In the current version, calling an entry immediately changes the start date and start time, leading to a wrong duration value. If no changes are made, the cancel button reverts everything, but it is very hard if corrections are intended.
Also, when creating new entries for the past, i.e. last week, the start time is set to the current time even if this option is unchecked. So I have to modify 4 fields! At least 2 fields should adapt automagically according to the default duration time.
However, the comment field keeps its value across all projects, but comments normally belong to a specific entry, and I think it would be better to reset this field for each instance.
40,671 | 2,868,935,336 | IssuesEvent | 2015-06-05 22:03:27 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Request to host package - bot v0.70 | Fixed Priority-Medium Pub-HostRequest Type-Task | <a href="https://github.com/kevmoo"><img src="https://avatars.githubusercontent.com/u/17034?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [kevmoo](https://github.com/kevmoo)**
_Originally opened as dart-lang/sdk#6992_
----
https://github.com/kevmoo/bot.dart/tree/v0.7.0
Thanks! | 1.0 | Request to host package - bot v0.70 - <a href="https://github.com/kevmoo"><img src="https://avatars.githubusercontent.com/u/17034?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [kevmoo](https://github.com/kevmoo)**
_Originally opened as dart-lang/sdk#6992_
----
https://github.com/kevmoo/bot.dart/tree/v0.7.0
Thanks! | priority | request to host package bot issue by originally opened as dart lang sdk thanks | 1 |
614,732 | 19,188,675,271 | IssuesEvent | 2021-12-05 16:34:55 | alerta/alerta | https://api.github.com/repos/alerta/alerta | closed | Empty string inserting into Postgres `blackouts` table (should be NULL) | bug priority: medium | **Issue Summary**
I've been testing blackouts and I found that sometimes the blackout condition wasn't being met so a notification was being generated when it shouldn't have.
Looking through the `is_blackout_period` method it looks like the SELECT statement is expecting certain columns to be NULL.
I inspected the `blackouts` table in the database and found that the columns were empty strings vs NULL:
e.g.
```
postgres@[local]alertad=# \pset null 'NULLZ'
Null display is "NULLZ".
postgres@[local]alertad=# select * from blackouts;
-[ RECORD 1 ]-------------------------------------
id | 34ae6ef1-9aaa-462c-9f1b-2c661bef7b7e
priority | 6
environment | Production
service | {}
resource |
event |
group | NULLZ
tags | {dbaautomation}
customer | dbsa
start_time | 2021-06-09 06:00:00
end_time | 2021-06-09 14:15:00
duration | 29700
user | matt
create_time | 2021-06-09 13:11:32.45
text |
origin | NULLZ
```
which explains why the SELECT statement isn't being matched.
**Environment**
- OS: Centos 7.9
- API version: 8.5.0
- Deployment: self-hosted
- For self-hosted, WSGI environment: nginx/uwsgi
- Database: Postgres 10.4
- Server config:
Auth enabled? Yes
Auth provider? LDAP
Customer views? Yes
- web UI version: 8.5.0
- CLI version: 8.5.0
**To Reproduce**
I was unable to reproduce this issue. I thought it had to do with me either copying or updating existing blackouts but further testing didn't result in the empty strings being inserted.
**Expected behavior**
**Screenshots**
**Additional context**
I think modifying the INSERT statement in the `create_blackout` method to evaluate an empty string as a NULL may be an appropriate "safety-check".
e.g.
```
def create_blackout(self, blackout):
insert = """
INSERT INTO blackouts (id, priority, environment, service, resource, event,
"group", tags, origin, customer, start_time, end_time,
duration, "user", create_time, text)
VALUES (%(id)s, %(priority)s, %(environment)s, %(service)s, nullif(%(resource)s,''), nullif(%(event)s,''),
nullif(%(group)s,''), %(tags)s, nullif(%(origin)s,''), %(customer)s, %(start_time)s, %(end_time)s,
%(duration)s, %(user)s, %(create_time)s, %(text)s)
RETURNING *, duration AS remaining
"""
return self._insert(insert, vars(blackout))
```
NOTE: Please provide as much information about your issue as possible.
Failure to provide basic details about your specific environment make
it impossible to know if an issue has already been fixed, can delay a
response and may result in your issue being closed without a resolution.
| 1.0 | Empty string inserting into Postgres `blackouts` table (should be NULL) - **Issue Summary**
I've been testing blackouts and I found that sometimes the blackout condition wasn't being met so a notification was being generated when it shouldn't have.
Looking through the `is_blackout_period` method it looks like the SELECT statement is expecting certain columns to be NULL.
I inspected the `blackouts` table in the database and found that the columns were empty strings vs NULL:
e.g.
```
postgres@[local]alertad=# \pset null 'NULLZ'
Null display is "NULLZ".
postgres@[local]alertad=# select * from blackouts;
-[ RECORD 1 ]-------------------------------------
id | 34ae6ef1-9aaa-462c-9f1b-2c661bef7b7e
priority | 6
environment | Production
service | {}
resource |
event |
group | NULLZ
tags | {dbaautomation}
customer | dbsa
start_time | 2021-06-09 06:00:00
end_time | 2021-06-09 14:15:00
duration | 29700
user | matt
create_time | 2021-06-09 13:11:32.45
text |
origin | NULLZ
```
which explains why the SELECT statement isn't being matched.
**Environment**
- OS: Centos 7.9
- API version: 8.5.0
- Deployment: self-hosted
- For self-hosted, WSGI environment: nginx/uwsgi
- Database: Postgres 10.4
- Server config:
Auth enabled? Yes
Auth provider? LDAP
Customer views? Yes
- web UI version: 8.5.0
- CLI version: 8.5.0
**To Reproduce**
I was unable to reproduce this issue. I thought it had to do with me either copying or updating existing blackouts but further testing didn't result in the empty strings being inserted.
**Expected behavior**
**Screenshots**
**Additional context**
I think modifying the INSERT statement in the `create_blackout` method to evaluate an empty string as a NULL may be an appropriate "safety-check".
e.g.
```
def create_blackout(self, blackout):
insert = """
INSERT INTO blackouts (id, priority, environment, service, resource, event,
"group", tags, origin, customer, start_time, end_time,
duration, "user", create_time, text)
VALUES (%(id)s, %(priority)s, %(environment)s, %(service)s, nullif(%(resource)s,''), nullif(%(event)s,''),
nullif(%(group)s,''), %(tags)s, nullif(%(origin)s,''), %(customer)s, %(start_time)s, %(end_time)s,
%(duration)s, %(user)s, %(create_time)s, %(text)s)
RETURNING *, duration AS remaining
"""
return self._insert(insert, vars(blackout))
```
NOTE: Please provide as much information about your issue as possible.
Failure to provide basic details about your specific environment make
it impossible to know if an issue has already been fixed, can delay a
response and may result in your issue being closed without a resolution.
| priority | empty string inserting into postgres blackouts table should be null issue summary i ve been testing blackouts and i found that sometimes the blackout condition wasn t being met so a notification was being generated when it shouldn t have looking through the is blackout period method it looks like the select statement is expecting certain columns to be null i inspected the blackouts table in the database and found that the columns were empty strings vs null e g postgres alertad pset null nullz null display is nullz postgres alertad select from blackouts id priority environment production service resource event group nullz tags dbaautomation customer dbsa start time end time duration user matt create time text origin nullz which explains why the select statement isn t being matched environment os centos api version deployment self hosted for self hosted wsgi environment nginx uwsgi database postgres server config auth enabled yes auth provider ldap customer views yes web ui version cli version to reproduce i was unable to reproduce this issue i thought it had to do with me either copying or updating existing blackouts but further testing didn t result in the empty strings being inserted expected behavior screenshots additional context i think modifying the insert statement in the create blackout method to evaluate an empty string as a null may be an appropriate safety check e g def create blackout self blackout insert insert into blackouts id priority environment service resource event group tags origin customer start time end time duration user create time text values id s priority s environment s service s nullif resource s nullif event s nullif group s tags s nullif origin s customer s start time s end time s duration s user s create time s text s returning duration as remaining return self insert insert vars blackout note please provide as much information about your issue as possible failure to provide basic details about your specific environment 
make it impossible to know if an issue has already been fixed can delay a response and may result in your issue being closed without a resolution | 1 |
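The `nullif()` guard in the suggested INSERT relies on the standard SQL NULLIF(x, y) function, which returns NULL when its two arguments are equal and x otherwise. That behaviour can be sanity-checked with any engine that implements it; a minimal sketch using SQLite (the two columns are chosen for illustration, not Alerta's full schema, and Postgres behaves the same way):

```python
import sqlite3

# NULLIF(x, '') turns an empty string into NULL and passes real
# values through, which is exactly the safety-check proposed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blackouts_demo (resource TEXT, origin TEXT)")
conn.execute(
    "INSERT INTO blackouts_demo VALUES (NULLIF(?, ''), NULLIF(?, ''))",
    ("", "web-ui"),
)
row = conn.execute("SELECT resource, origin FROM blackouts_demo").fetchone()
print(row)  # (None, 'web-ui'): '' became NULL, the real value survived
```

With the guard in place, the `is_blackout_period` SELECT that expects NULL columns would match rows written by any client, however the empty strings originally got in.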
210,252 | 7,187,327,811 | IssuesEvent | 2018-02-02 04:27:22 | PMEAL/OpenPNM | https://api.github.com/repos/PMEAL/OpenPNM | closed | OpenPNM seems to keep reference of objects even after purging them! | Priority - Medium bug | I use ```purge_object()``` method to delete unused objects. However, once I tracked my memory on Windows Task Manager, it turned out even after purging the object, OpenPNM doesn't free the allocated memory to those objects. I tried to manually free unreferenced objects, but it didn't work. I also tried to change their scope by wrapping the part of the code which instantiates those OpenPNM objects, in a function, but it also didn't work. Basically, what I do is I generate a cubic network first, and then in a for-loop, I clone the network and play around with the cloned version, and finally purge it. Having said that, I think OpenPNM somehow keeps reference even after purging objects. | 1.0 | OpenPNM seems to keep reference of objects even after purging them! - I use ```purge_object()``` method to delete unused objects. However, once I tracked my memory on Windows Task Manager, it turned out even after purging the object, OpenPNM doesn't free the allocated memory to those objects. I tried to manually free unreferenced objects, but it didn't work. I also tried to change their scope by wrapping the part of the code which instantiates those OpenPNM objects, in a function, but it also didn't work. Basically, what I do is I generate a cubic network first, and then in a for-loop, I clone the network and play around with the cloned version, and finally purge it. Having said that, I think OpenPNM somehow keeps reference even after purging objects. 
| priority | openpnm seems to keep reference of objects even after purging them i use purge object method to delete unused objects however once i tracked my memory on windows task manager it turned out even after purging the object openpnm doesn t free the allocated memory to those objects i tried to manually free unreferenced objects but it didn t work i also tried to change their scope by wrapping the part of the code which instantiates those openpnm objects in a function but it also didn t work basically what i do is i generate a cubic network first and then in a for loop i clone the network and play around with the cloned version and finally purge it having said that i think openpnm somehow keeps reference even after purging objects | 1 |
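The behaviour reported above is consistent with how Python memory management works in general: an object is reclaimed only once every reference to it is gone, so purging the user's name for an object while some internal registry still holds it frees nothing. A generic sketch of that failure mode (a plain list stands in for whatever internal bookkeeping a framework might keep; this is not OpenPNM's actual code):

```python
import gc

registry = []  # stands in for a framework's internal list of objects

class Network:
    def __init__(self):
        self.pores = list(range(1_000_000))  # large payload

net = Network()
registry.append(net)   # a second, hidden reference to the same object

del net                # "purging" the user's name for the object
gc.collect()

# The payload is still alive: the registry reference keeps it reachable.
print(len(registry), len(registry[0].pores))  # 1 1000000
```

Freeing the memory requires removing every such reference (e.g. clearing the registry as well) before the collector can reclaim the object.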
42,157 | 2,869,100,971 | IssuesEvent | 2015-06-05 23:20:38 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | yamlToString should be part of the yaml package | Area-Pkg Pkg-Yaml Priority-Medium Triaged Type-Enhancement | There seems to be an implementation of yamlToString in the pub sources. Can we share it as part of the yaml package? | 1.0 | yamlToString should be part of the yaml package - There seems to be an implementation of yamlToString in the pub sources. Can we share it as part of the yaml package? | priority | yamltostring should be part of the yaml package there seems to be an implementation of yamltostring in the pub sources can we share it as part of the yaml package | 1 |
620,305 | 19,558,718,231 | IssuesEvent | 2022-01-03 13:26:11 | futuredapp/hauler | https://api.github.com/repos/futuredapp/hauler | closed | Black status bar and blinking navigation bar after multitasking | Priority: Medium Status: Help wanted Type: Bug | Hi!
I just started using Hauler and implemented a basic prototype according to the instructions in the readme. On first open it works as expected, but if switching app and going back (going to the multitask app picker and select my app again) the status bar gets black and the navigation bar gets black or starts blinking. I've not used black anywhere in the theme, and the blinking is definitely not supposed to happen. :)
I can provide screenshots and code if neccessary. Tested on Pixel 4XL, android 10.
Thanks for making this excellent library! :) | 1.0 | Black status bar and blinking navigation bar after multitasking - Hi!
I just started using Hauler and implemented a basic prototype according to the instructions in the readme. On first open it works as expected, but if switching app and going back (going to the multitask app picker and select my app again) the status bar gets black and the navigation bar gets black or starts blinking. I've not used black anywhere in the theme, and the blinking is definitely not supposed to happen. :)
I can provide screenshots and code if neccessary. Tested on Pixel 4XL, android 10.
Thanks for making this excellent library! :) | priority | black status bar and blinking navigation bar after multitasking hi i just started using hauler and implemented a basic prototype according to the instructions in the readme on first open it works as expected but if switching app and going back going to the multitask app picker and select my app again the status bar gets black and the navigation bar gets black or starts blinking i ve not used black anywhere in the theme and the blinking is definitely not supposed to happen i can provide screenshots and code if neccessary tested on pixel android thanks for making this excellent library | 1 |
806,113 | 29,801,626,601 | IssuesEvent | 2023-06-16 08:33:02 | OpenBioML/chemnlp | https://api.github.com/repos/OpenBioML/chemnlp | opened | Move data mixing inside training pipeline | work package: model training priority: medium | The current creation of mixed datasets involves creating datasets with specific proportions out of already tokenised datasets and saving them down to disk. We want to move this configuration into the training pipeline so we can dynamically allocate proportions for training.
1. Creating large datasets with duplicate samples is inefficient storage-wise
2. We want more flexibility around grid searching these splits dynamically
3. It's also very slow to create these mixtures, approx 2 hours per 4B tokens | 1.0 | Move data mixing inside training pipeline - The current creation of mixed datasets involves creating datasets with specific proportions out of already tokenised datasets and saving them down to disk. We want to move this configuration into the training pipeline so we can dynamically allocate proportions for training.
1. Creating large datasets with duplicate samples is inefficient storage-wise
2. We want more flexibility around grid searching these splits dynamically
3. It's also very slow to create these mixtures, approx 2 hours per 4B tokens | priority | move data mixing inside training pipeline the current creation of mixed dataset involves creating datasets with specific proportions out of already tokenised datasets and saving them down to disk we want to move this configuration into the training pipeline so we can dynamically allocated proportions for training creating large datasets with duplicate samples is inefficient storage wise we want more flexibility around grid searching these splits dynamically it s also very slow to create these mixtures approx hours per tokens | 1 |
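Point 2 above, grid-searching the splits dynamically, becomes straightforward if the mixture is sampled with weights at training time instead of materialised on disk with duplicated samples. A minimal standard-library sketch (the dataset names and contents here are invented for illustration):

```python
import random

def sample_mixture(datasets, weights, n, seed=0):
    """Draw n training examples from several datasets according to
    mixture weights, without ever writing a merged copy to disk."""
    rng = random.Random(seed)
    names = list(datasets)
    picks = rng.choices(names, weights=weights, k=n)
    return [(name, rng.choice(datasets[name])) for name in picks]

datasets = {
    "smiles": ["c1ccccc1", "CCO", "CC(=O)O"],
    "text": ["a chemistry sentence", "another sentence"],
}
batch = sample_mixture(datasets, weights=[0.8, 0.2], n=10)
print(len(batch))  # 10
```

Because the weights are just an argument, a grid search over mixture proportions is a loop over weight vectors rather than 2 hours of dataset creation per configuration.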
49,516 | 3,003,276,233 | IssuesEvent | 2015-07-24 22:25:54 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Groups UI: When creating a group that has only other groups as members, membership column has typo | Priority: Medium Status: Dev Type: Bug |
shows " , 3 groups" since there are no users, which would normally show: "2 users, 1 group" | 1.0 | Groups UI: When creating a group that has only other groups as members, membership column has typo -
shows " , 3 groups" since there are no users, which would normally show: "2 users, 1 group" | priority | groups ui when creating a group that has only other groups as members membership column has typo shows groups since there are no users which would normally show users group | 1 |
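The stray leading comma appears because the zero-count part is dropped from the string but its separator is not; building the summary by joining only the non-empty parts avoids it. A small illustrative sketch (not Dataverse's actual rendering code):

```python
def membership_summary(users, groups):
    """Join only the non-zero counts so '0 users' never leaves a stray comma."""
    parts = []
    if users:
        parts.append(f"{users} user{'s' if users != 1 else ''}")
    if groups:
        parts.append(f"{groups} group{'s' if groups != 1 else ''}")
    return ", ".join(parts)

print(membership_summary(0, 3))  # 3 groups
print(membership_summary(2, 1))  # 2 users, 1 group
```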
256,254 | 8,127,197,843 | IssuesEvent | 2018-08-17 07:05:05 | codephil-columbia/typephil | https://api.github.com/repos/codephil-columbia/typephil | closed | Current Progress Bar Doesn't Work | Medium Priority | Steps to Reproduce: Current Progress bar is always 0%. It should be the percent completed of the chapter with respect to the lesson. So lesson #/total number of lessons in chapter x.

| 1.0 | Current Progress Bar Doesn't Work - Steps to Reproduce: Current Progress bar is always 0%. It should be the percent completed of the chapter with respect to the lesson. So lesson #/total number of lessons in chapter x.

| priority | current progress bar doesn t work steps to reproduce current progress bar is always i should be percent completed of the chapter in respect to the lesson so lesson total number of lessons in chapter x | 1 |
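The expected value spelled out in the report, lesson number divided by the total number of lessons in the chapter, is a one-line computation; a sketch with made-up numbers:

```python
def chapter_progress(lesson_number, total_lessons):
    """Percent of the chapter completed: lesson # / total lessons."""
    return round(100 * lesson_number / total_lessons)

print(chapter_progress(3, 12))  # 25
```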
720,454 | 24,793,383,323 | IssuesEvent | 2022-10-24 15:17:59 | AY2223S1-CS2113-W12-2/tp | https://api.github.com/repos/AY2223S1-CS2113-W12-2/tp | closed | As a user, I want to view comments related to my spending habits when requesting for a financial summary | type.Story priority.Medium | ... so that I can be reminded to change my spending habits | 1.0 | As a user, I want to view comments related to my spending habits when requesting for a financial summary - ... so that I can be reminded to change my spending habits | priority | as a user i want to view comments related to my spending habits when requesting for a financial summary so that i can be reminded to change my spending habits | 1 |
161,286 | 6,112,097,925 | IssuesEvent | 2017-06-21 18:32:49 | minio/minio-go | https://api.github.com/repos/minio/minio-go | closed | ListBuckets() always sends a request with no region set | priority: medium | https://github.com/minio/minio-go/commit/8c34cc49a0b000ebe755577feab802ee721535ed changed the behaviour of `newRequest()` to only use data retrieved from the `metadata` object; in the case of `ListBuckets()` this object [does not contain this data](https://github.com/minio/minio-go/blob/master/api-list.go#L40), so a request is sent with `region == ""`.
The Minio server (at least the version we're using) returns 400 Bad Request in this case:
```
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>Error parsing the X-Amz-Credential parameter; the region is wrong;</Message>
[...]
</Error>
```
Prior to the above commit, `newRequest()` defaulted to `us-east-1` or `cn-north-1`, overriding based on metadata rather than making metadata the only source - could this be restored? Perhaps something like:
```go
var location string
// Gather location only if bucketName is present.
if metadata.bucketName != "" && metadata.bucketLocation == "" {
location, err = c.getBucketLocation(metadata.bucketName)
if err != nil {
return nil, err
}
} else if metadata.bucketLocation != "" {
location = metadata.bucketLocation
} else if s3utils.IsAmazonChinaEndpoint(c.endpointURL) {
location = "cn-north-1"
} else {
location = "us-east-1"
}
``` | 1.0 | ListBuckets() always sends a request with no region set - https://github.com/minio/minio-go/commit/8c34cc49a0b000ebe755577feab802ee721535ed changed the behaviour of `newRequest()` to only use data retrieved from the `metadata` object; in the case of `ListBuckets()` this object [does not contain this data](https://github.com/minio/minio-go/blob/master/api-list.go#L40), so a request is sent with `region == ""`.
The Minio server (at least the version we're using) returns 400 Bad Request in this case:
```
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>Error parsing the X-Amz-Credential parameter; the region is wrong;</Message>
[...]
</Error>
```
Prior to the above commit, `newRequest()` defaulted to `us-east-1` or `cn-north-1`, overriding based on metadata rather than making metadata the only source - could this be restored? Perhaps something like:
```go
var location string
// Gather location only if bucketName is present.
if metadata.bucketName != "" && metadata.bucketLocation == "" {
location, err = c.getBucketLocation(metadata.bucketName)
if err != nil {
return nil, err
}
} else if metadata.bucketLocation != "" {
location = metadata.bucketLocation
} else if s3utils.IsAmazonChinaEndpoint(c.endpointURL) {
location = "cn-north-1"
} else {
location = "us-east-1"
}
```
| priority | listbuckets always sends a request with no region set changed the behaviour of newrequest to only use data retrieved from the metadata object in the case of listbuckets this object so a request is sent with region the minio server at least the version we re using returns bad request in this case authorizationqueryparameterserror error parsing the x amz credential parameter the region is wrong prior to the above commit newrequest defaulted to us east or cn north overriding based on metadata rather than making metadata the only source could this be restored perhaps something like go var location string gather location only if bucketname is present if metadata bucketname metadata bucketlocation location err c getbucketlocation metadata bucketname if err nil return nil err else if metadata bucketlocation location metadata bucketlocation else if isamazonchinaendpoint c endpointurl location cn north else location us east | 1 |
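The fallback order in the quoted Go proposal resolves the region from four sources in decreasing priority: an explicit metadata location, a per-bucket lookup, an Amazon China endpoint default, and finally `us-east-1`. Stripped of minio-go specifics, the decision logic looks like this (a Python sketch with illustrative names, not the library's API):

```python
def resolve_location(bucket_name, bucket_location, endpoint_is_china, lookup):
    """Mirror of the proposed fallback order for the request region."""
    if bucket_location:           # explicit metadata location always wins
        return bucket_location
    if bucket_name:               # ask the server where this bucket lives
        return lookup(bucket_name)
    if endpoint_is_china:         # Amazon China endpoints default here
        return "cn-north-1"
    return "us-east-1"            # global default, never an empty region

# ListBuckets(): no bucket name, no location -> a sane default, not ""
print(resolve_location("", "", False, lambda b: "eu-west-1"))  # us-east-1
```

With this ordering, the `ListBuckets()` metadata (which carries neither a bucket name nor a location) can never produce an empty region, avoiding the 400 response described above.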
272,548 | 8,514,702,310 | IssuesEvent | 2018-10-31 19:19:59 | ansible/awx | https://api.github.com/repos/ansible/awx | opened | Potentially reap obsolete schedules | component:api component:ui priority:medium type:enhancement | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- API
- UI
##### SUMMARY
Schedules can have an end-by date, or an "end after 20 runs" configuration.
It may be useful to have a process (management job?) that automatically deletes all "expired" schedules.
##### ADDITIONAL INFORMATION
| 1.0 | Potentially reap obsolete schedules - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- API
- UI
##### SUMMARY
Schedules can have an end-by date, or an "end after 20 runs" configuration.
It may be useful to have a process (management job?) that automatically deletes all "expired" schedules.
##### ADDITIONAL INFORMATION
| priority | potentially reap obsolete schedules issue type feature idea component name api ui summary schedules can have an end by date or an end after runs configuration it may be useful to have a process management job that automatically deletes all expired schedules additional information | 1 |
686,681 | 23,501,075,699 | IssuesEvent | 2022-08-18 08:28:53 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | When accessing an edge node, the page is blank and no content is displayed | kind/bug kind/need-to-verify priority/medium | **Describe the bug**

**Versions used (KubeSphere/Kubernetes)**
KubeSphere: `v3.3.0`
| 1.0 | When accessing an edge node, the page is blank and no content is displayed - **Describe the bug**

**Versions used (KubeSphere/Kubernetes)**
KubeSphere: `v3.3.0`
| priority | when accessing an edge node the page is blank and no content is displayed describe the bug versions used kubesphere kubernetes kubesphere | 1 |
224,485 | 7,471,032,896 | IssuesEvent | 2018-04-03 07:53:44 | minio/minio | https://api.github.com/repos/minio/minio | opened | ListObject returns incorrect storageClass | priority: medium | ## Expected Behavior
ListObject returns correct StorageClass of the object
## Current Behavior
ListObject returns `STANDARD` storageClass for objects set with `REDUCED_REDUNDANCY`.
## Possible Solution
Read storageClass set in metadata before listing objects
## Steps to Reproduce (for bugs)
1. Upload an object with storageClass `REDUCED_REDUNDANCY`.
2. Perform ListObject
## Context
Code Review
| 1.0 | ListObject returns incorrect storageClass - ## Expected Behavior
ListObject returns correct StorageClass of the object
## Current Behavior
ListObject returns `STANDARD` storageClass for objects set with `REDUCED_REDUNDANCY`.
## Possible Solution
Read storageClass set in metadata before listing objects
## Steps to Reproduce (for bugs)
1. Upload an object with storageClass `REDUCED_REDUNDANCY`.
2. Perform ListObject
## Context
Code Review
| priority | listobject returns incorrect storageclass expected behavior listobject returns correct storageclass of the object current behavior listobject returns standard storageclass for objects set with reduced redundancy possible solution read storageclass set in metadata before listing objects steps to reproduce for bugs upload an object with storageclass reduced redundancy perform listobject context code review | 1 |
492,632 | 14,216,692,013 | IssuesEvent | 2020-11-17 09:20:37 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Make clicking a link insta-close the tooltip and not reopen immediately | Category: UI Priority: Medium | So you don't get blocking tooltips like this:

| 1.0 | Make clicking a link insta-close the tooltip and not reopen immediately - So you don't get blocking tooltips like this:

| priority | make clicking a link insta close the tooltip and not reopen immediately so you dont get blocking tooltips like this | 1 |
191,029 | 6,824,931,274 | IssuesEvent | 2017-11-08 08:42:22 | R-and-LaTeX/CorsoDiLatex | https://api.github.com/repos/R-and-LaTeX/CorsoDiLatex | closed | Upload material without solutions to Moodle for the first lesson | priority:medium | **Description**
When the slides
**Other issues**
issue #65 must be completed first | 1.0 | Upload material without solutions to Moodle for the first lesson - **Description**
When the slides
**Other issues**
issue #65 must be completed first | priority | upload material without solutions to moodle for the first lesson description when the slides other issues the issue must be completed first | 1 |
305,377 | 9,368,491,167 | IssuesEvent | 2019-04-03 08:50:03 | SatelliteQE/robottelo | https://api.github.com/repos/SatelliteQE/robottelo | closed | [Web UI] Test scenario around Dashboard : Subscriptions Status , Sync overview | 6.5 Medium Priority UI | The Dashboard is the first thing every customer looks at to see the status of everything. A small bug there may mislead customers during their day-to-day tasks. It would be great to add some more tests around it, i.e. Subscriptions Status and Sync overview | 1.0 | [Web UI] Test scenario around Dashboard : Subscriptions Status , Sync overview - The Dashboard is the first thing every customer looks at to see the status of everything. A small bug there may mislead customers during their day-to-day tasks. It would be great to add some more tests around it, i.e. Subscriptions Status and Sync overview | priority | test scenario around dashboard subscriptions status sync overview dashboard is entity where every customer take a look at very first and see the status of every thing a small thing bug may mislead to customer on this one during his her day to day task it would be great to add some more test around it i e subscriptions status sync overview | 1 |
497,160 | 14,364,778,883 | IssuesEvent | 2020-12-01 00:11:44 | uNetworking/uWebSockets | https://api.github.com/repos/uNetworking/uWebSockets | closed | Fix pedantic warnings | medium priority | Hi,
first of all, thanks for this great library.
When I compile it, I noticed that it produces one warning, which spams the logs a bit because there are many references after it.
```
lib\uWebSockets\src\WebSocketContext.h(326,47): warning C4018: '<': signed/unsigned mismatch
```
It's no big deal and not crucial, but I just wanted to inform you that the warning is produced, in case you didn't notice.
I am using MSVC 19 (C++20 standard).
Greetings | 1.0 | Fix pedantic warnings - Hi,
first of all, thanks for this great library.
When I compile it, I noticed that it produces one warning, which spams the logs a bit because there are many references after it.
```
lib\uWebSockets\src\WebSocketContext.h(326,47): warning C4018: '<': signed/unsigned mismatch
```
It's no big deal and not crucial, but I just wanted to inform you that the warning is produced, in case you didn't notice.
I am using MSVC 19 (C++20 standard).
Greetings | priority | fix pedantic warnings hi first of all thanks for this great library when i compile it i noticed that it produces one warning which spams the logs a bit because there are many references after it lib uwebsockets src websocketcontext h warning signed unsigned mismatch it s no big deal an not crucial but i just wanted to inform you that the warning is produced in case you didn t notice i am using msvc c standard greetings | 1 |
804,785 | 29,501,256,880 | IssuesEvent | 2023-06-02 22:09:38 | aws/s2n-tls | https://api.github.com/repos/aws/s2n-tls | closed | Crash on thread termination when s2n has been unloaded | priority/high size/medium | ### Problem:
I've encountered a couple of issues with s2n's thread local state clean up. The first is a crash on thread exit.
Pre-requisites: s2n built as a shared object on a glibc-based system
The following simple C++ program crashes while resolving the call to helper1.join():
```
#include <thread>
#include <dlfcn.h>
void foo()
{
void *s2n_so = dlopen("<path to libs2n.so>", RTLD_NOW);
int (*s2n_init)(void) = NULL;
*(void **)(&s2n_init) = dlsym(s2n_so, "s2n_init");
int (*s2n_cleanup)(void) = NULL;
*(void **)(&s2n_cleanup) = dlsym(s2n_so, "s2n_cleanup");
(*s2n_init)();
(*s2n_cleanup)();
dlclose(s2n_so);
}
int main(int argc, char *argv[])
{
std::thread helper1(foo);
helper1.join();
return 0;
}
```
The crash call stack looks like:
```
#0 0x00007ffff6d8ee79 in ?? ()
#1 0x00007ffff7891711 in __GI___nptl_deallocate_tsd () at ./nptl/nptl_deallocate_tsd.c:73
#2 __GI___nptl_deallocate_tsd () at ./nptl/nptl_deallocate_tsd.c:22
#3 0x00007ffff78949ca in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:453
#4 0x00007ffff7926a00 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
In particular, nptl_deallocate_ts.c:73 is the place in glibc where thread local slot destructors are invoked (https://fossies.org/linux/glibc/nptl/nptl_deallocate_tsd.c). In this case, the destructor call refers to code that has been unloaded due to the call to dlclose().
The root cause is the initialization of `s2n_per_thread_rand_state_key` in `s2n_drbg_make_rand_state_key`: https://github.com/aws/s2n-tls/blob/v1.3.43/utils/s2n_random.c#L147. The slot is allocated with a destructor, but even if s2n is shutdown properly beforehand, the destructor is still getting invoked at thread end, after the shared object unload.
The setup may seem contrived, but it's actually a simplification of a scenario we are experiencing when a managed runtime (like node) is using a native module (the CRT, that includes s2n) in a worker thread. Before terminating itself, the worker thread unloads the module which in turn unloads s2n.
### Solution:
https://github.com/aws/s2n-tls/pull/3988
Since destructors are only called if the slot contains a non-null value, my tentative fix proposal is to zero the slot in `s2n_rand_cleanup_thread` after the bits have been wiped. The crash disappears once this is done.
I don't understand s2n's thread local storage usage well enough to know if that is sufficient or if there is a potential for `s2n_rand_cleanup_thread` to be called while the key is uninitialized (so perhaps don't fail if the `pthread_setspecific` call fails).
* **Does this change what S2N sends over the wire?** No
* **Does this change any public APIs?** No
* **Which versions of TLS will this impact?** All/universal
### Requirements / Acceptance Criteria:
* **Testing:**
Ideally a standalone test that essentially repeats the above program and doesn't crash would be a useful test. Converting the std::thread to pthreads API would let the test stay pure C. Since it requires a known fixed path to a shared build of s2n (and no link time dependency), the setup may be a bit messier than existing tests.
### Out of scope:
N/A
| 1.0 | Crash on thread termination when s2n has been unloaded - ### Problem:
I've encountered a couple of issues with s2n's thread local state clean up. The first is a crash on thread exit.
Pre-requisites: s2n built as a shared object on a glibc-based system
The following simple C++ program crashes while resolving the call to helper1.join():
```
#include <thread>
#include <dlfcn.h>
void foo()
{
void *s2n_so = dlopen("<path to libs2n.so>", RTLD_NOW);
int (*s2n_init)(void) = NULL;
*(void **)(&s2n_init) = dlsym(s2n_so, "s2n_init");
int (*s2n_cleanup)(void) = NULL;
*(void **)(&s2n_cleanup) = dlsym(s2n_so, "s2n_cleanup");
(*s2n_init)();
(*s2n_cleanup)();
dlclose(s2n_so);
}
int main(int argc, char *argv[])
{
std::thread helper1(foo);
helper1.join();
return 0;
}
```
The crash call stack looks like:
```
#0 0x00007ffff6d8ee79 in ?? ()
#1 0x00007ffff7891711 in __GI___nptl_deallocate_tsd () at ./nptl/nptl_deallocate_tsd.c:73
#2 __GI___nptl_deallocate_tsd () at ./nptl/nptl_deallocate_tsd.c:22
#3 0x00007ffff78949ca in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:453
#4 0x00007ffff7926a00 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
In particular, nptl_deallocate_ts.c:73 is the place in glibc where thread local slot destructors are invoked (https://fossies.org/linux/glibc/nptl/nptl_deallocate_tsd.c). In this case, the destructor call refers to code that has been unloaded due to the call to dlclose().
The root cause is the initialization of `s2n_per_thread_rand_state_key` in `s2n_drbg_make_rand_state_key`: https://github.com/aws/s2n-tls/blob/v1.3.43/utils/s2n_random.c#L147. The slot is allocated with a destructor, but even if s2n is shutdown properly beforehand, the destructor is still getting invoked at thread end, after the shared object unload.
The setup may seem contrived, but it's actually a simplification of a scenario we are experiencing when a managed runtime (like node) is using a native module (the CRT, that includes s2n) in a worker thread. Before terminating itself, the worker thread unloads the module which in turn unloads s2n.
### Solution:
https://github.com/aws/s2n-tls/pull/3988
Since destructors are only called if the slot contains a non-null value, my tentative fix proposal is to zero the slot in `s2n_rand_cleanup_thread` after the bits have been wiped. The crash disappears once this is done.
I don't understand s2n's thread local storage usage well enough to know if that is sufficient or if there is a potential for `s2n_rand_cleanup_thread` to be called while the key is uninitialized (so perhaps don't fail if the `pthread_setspecific` call fails).
* **Does this change what S2N sends over the wire?** No
* **Does this change any public APIs?** No
* **Which versions of TLS will this impact?** All/universal
### Requirements / Acceptance Criteria:
* **Testing:**
Ideally a standalone test that essentially repeats the above program and doesn't crash would be a useful test. Converting the std::thread to pthreads API would let the test stay pure C. Since it requires a known fixed path to a shared build of s2n (and no link time dependency), the setup may be a bit messier than existing tests.
### Out of scope:
N/A
| priority | crash on thread termination when has been unloaded problem i ve encountered a couple of issues with s thread local state clean up the first is a crash on thread exit pre requisites built as a shared object on a glibc based system the following simple c program crashes while resolving the call to join include include void foo void so dlopen rtld now int init void null void init dlsym so init int cleanup void null void cleanup dlsym so cleanup init cleanup dlclose so int main int argc char argv std thread foo join return the crash call stack looks like in in gi nptl deallocate tsd at nptl nptl deallocate tsd c gi nptl deallocate tsd at nptl nptl deallocate tsd c in start thread arg at nptl pthread create c in at sysdeps unix sysv linux s in particular nptl deallocate ts c is the place in glibc where thread local slot destructors are invoked in this case the destructor call refers to code that has been unloaded due to the call to dlclose the root cause is the initialization of per thread rand state key in drbg make rand state key the slot is allocated with a destructor but even if is shutdown properly beforehand the destructor is still getting invoked at thread end after the shared object unload the setup may seem contrived but it s actually a simplification of a scenario we are experiencing when a managed runtime like node is using a native module the crt that includes in a worker thread before terminating itself the worker thread unloads the module which in turn unloads solution since destructors are only called if the slot contains a non null value my tentative fix proposal is to zero the slot in rand cleanup thread after the bits have been wiped the crash disappears once this is done i don t understand s thread local storage usage well enough to know if that is sufficient or if there is a potential for rand cleanup thread to be called while the key is uninitialized so perhaps don t fail if the pthread setspecific call fails does this change what sends 
over the wire no does this change any public apis no which versions of tls will this impact all universal requirements acceptance criteria testing ideally a standalone test that essentially repeats the above program and doesn t crash would be a useful test converting the std thread to pthreads api would let the test stay pure c since it requires a known fixed path to a shared build of and no link time dependency the setup may be a bit messier than existing tests out of scope n a | 1 |
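The proposed fix leans on a POSIX guarantee worth spelling out: at thread exit, a key's destructor is invoked only for slots that still hold a non-NULL value. The standalone sketch below (illustrative names, not s2n code) demonstrates that zeroing the slot during per-thread cleanup suppresses the destructor call — which is exactly why NULLing the slot in `s2n_rand_cleanup_thread` stops glibc from jumping into code that `dlclose()` already unloaded.

```c
#include <pthread.h>
#include <stddef.h>

/* Demonstrates: a pthread_key destructor runs at thread exit only if the
 * slot still holds a non-NULL value. Zeroing the slot in a library's
 * per-thread cleanup therefore prevents the destructor from firing later,
 * e.g. after the library's .so has been unloaded. */
static pthread_key_t key;
static int destructor_ran;

static void destructor(void *value) {
    (void)value;
    destructor_ran = 1;
}

static void *worker(void *do_cleanup) {
    pthread_setspecific(key, (void *)1);   /* simulate per-thread state */
    if (do_cleanup != NULL)
        pthread_setspecific(key, NULL);    /* the fix: zero the slot */
    return NULL;                           /* thread exit runs destructors */
}

/* Returns 1 if the destructor fired during the worker thread's exit. */
static int demo(int do_cleanup) {
    pthread_t t;
    destructor_ran = 0;
    pthread_key_create(&key, destructor);
    pthread_create(&t, NULL, worker, do_cleanup ? (void *)1 : NULL);
    pthread_join(t, NULL);                 /* synchronizes destructor_ran */
    pthread_key_delete(key);
    return destructor_ran;
}
```

The same reasoning suggests the cleanup path should tolerate being called before the key exists, i.e. not fail when `pthread_setspecific` errors, as the issue notes.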
682,376 | 23,342,732,817 | IssuesEvent | 2022-08-09 15:13:20 | CardinalKit/CardinalKit-Android | https://api.github.com/repos/CardinalKit/CardinalKit-Android | opened | [Tasks] Add Instructional task | Priority 2 - Medium | Need to provide an *instructional* task type, which just displays instructions for the patient to follow and asks them to confirm that they have completed them successfully. | 1.0 | [Tasks] Add Instructional task - Need to provide an *instructional* task type, which just displays instructions for the patient to follow and asks them to confirm that they have completed them successfully. | priority | add instructional task need to provide an instructional task type which just displays instructions for the patient to follow and asks them to confirm that they have completed them successfully | 1 |
595,912 | 18,077,052,403 | IssuesEvent | 2021-09-21 11:02:53 | lea927/drop-that-beat | https://api.github.com/repos/lea927/drop-that-beat | closed | Given that the track is already stored in the db, the system should retrieve the track data. | Priority: Medium State: Backlog Type: Feature | ### Task
- [x] Update track model to include method to check if track is already stored | 1.0 | Given that the track is already stored in the db, the system should retrieve the track data. - ### Task
- [x] Update track model to include method to check if track is already stored | priority | given that the track is already stored in the db the system should retrieve the track data task update track model to include method to check if track is already stored | 1 |
142,989 | 5,487,500,214 | IssuesEvent | 2017-03-14 04:57:40 | fossasia/phimpme-android | https://api.github.com/repos/fossasia/phimpme-android | closed | Discard button - image still gets saved | Bug Priority: Medium | **Actual Behavior**
When we press the discard button after taking a picture, the image still gets saved and shows up in the gallery.
**Expected Behavior**
Discard button should delete the saved copy
**Steps to reproduce it**
1. Take a picture.
2. discard it.
3. go to gallery tab, the image is still saved.
**Would you like to work on the issue?**
Yes, I would like to work on it.
| 1.0 | Discard button - image still gets saved - **Actual Behavior**
When we press the discard button after taking a picture, the image still gets saved and shows up in the gallery.
**Expected Behavior**
Discard button should delete the saved copy
**Steps to reproduce it**
1. Take a picture.
2. discard it.
3. go to gallery tab, the image is still saved.
**Would you like to work on the issue?**
Yes, I would like to work on it.
| priority | discard button image still gets saved actual behavior when we press the discard button after taking a picture the image still gets saved and shows up in gallery expected behavior discard button should delete the saved copy steps to reproduce it take a picture discard it go to gallery tab the image is still saved would you like to work on the issue yes i would like to work on it | 1 |
17,006 | 2,615,128,898 | IssuesEvent | 2015-03-01 05:58:23 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | opened | AbstractGoogleAuthorizationCodeServlet | auto-migrated Component-Auth Priority-Medium Type-Enhancement | ```
External references, such as a standards document, or specification?
http://javadoc.google-oauth-java-client.googlecode.com/hg/1.10.1-beta/com/google
/api/client/extensions/servlet/auth/oauth2/AbstractAuthorizationCodeServlet.html
Java environments (e.g. Java 6, Android 2.3, App Engine, or All)?
servlet, appengine
Please describe the feature requested.
Idea that we should add a AbstractGoogleAuthorizationCodeServlet that is just
like AbstractAuthorizationCodeServlet but uses GoogleAuthorizationCodeFlow for
the flow and GoogleCredential for the credential to enable additional
functionality found only in those subclasses. If we do this, we would also
need AbstractGoogleAppEngineAuthorizationCodeServlet.
```
Original issue reported on code.google.com by `yan...@google.com` on 21 Aug 2012 at 8:29 | 1.0 | AbstractGoogleAuthorizationCodeServlet - ```
External references, such as a standards document, or specification?
http://javadoc.google-oauth-java-client.googlecode.com/hg/1.10.1-beta/com/google
/api/client/extensions/servlet/auth/oauth2/AbstractAuthorizationCodeServlet.html
Java environments (e.g. Java 6, Android 2.3, App Engine, or All)?
servlet, appengine
Please describe the feature requested.
Idea that we should add a AbstractGoogleAuthorizationCodeServlet that is just
like AbstractAuthorizationCodeServlet but uses GoogleAuthorizationCodeFlow for
the flow and GoogleCredential for the credential to enable additional
functionality found only in those subclasses. If we do this, we would also
need AbstractGoogleAppEngineAuthorizationCodeServlet.
```
Original issue reported on code.google.com by `yan...@google.com` on 21 Aug 2012 at 8:29 | priority | abstractgoogleauthorizationcodeservlet external references such as a standards document or specification api client extensions servlet auth abstractauthorizationcodeservlet html java environments e g java android app engine or all servlet appengine please describe the feature requested idea that we should add a abstractgoogleauthorizationcodeservlet that is just like abstractauthorizationcodeservlet but uses googleauthorizationcodeflow for the flow and googlecredential for the credential to enable additional functionality found only in those subclasses if we do this we would also need abstractgoogleappengineauthorizationcodeservlet original issue reported on code google com by yan google com on aug at | 1 |
70,019 | 3,316,422,055 | IssuesEvent | 2015-11-06 16:49:42 | TeselaGen/Peony-Issue-Tracking | https://api.github.com/repos/TeselaGen/Peony-Issue-Tracking | opened | Map view sometimes does not appear | Location: App Priority: Medium Type: Bug | _From @mfero on October 19, 2015 18:32_
Map view sometimes does not appear. It comes back when you toggle to a different tab and back again.
(found during live demo at DAS!)
_Copied from original issue: TeselaGen/ve#1463_ | 1.0 | Map view sometimes does not appear - _From @mfero on October 19, 2015 18:32_
Map view sometimes does not appear. It comes back when you toggle to a different tab and back again.
(found during live demo at DAS!)
_Copied from original issue: TeselaGen/ve#1463_ | priority | map view sometimes does not appear from mfero on october map view sometimes does not appear comes back when you toggle to different tab and back again found during live demo at das copied from original issue teselagen ve | 1 |
166,605 | 6,307,496,313 | IssuesEvent | 2017-07-22 01:51:38 | jasonwynn10/MyPlot | https://api.github.com/repos/jasonwynn10/MyPlot | closed | /p home Only works in the world your plot is in | API 3 Category: Feature Request Priority: Medium Status: Work In Progress | <!-- put an 'x' in the brackets -->
- [x] This issue isn't duplicated - you can check if it is by using the search bar located at the top left hand corner and select "Issues" on the left.
- [x] This issue includes appropriate markdown for sections - e.g. code blocks for crash dumps.
- [x] This issue is understandable - feel free to use your native language to write issues if you are not comfortable with English.
<!-- ISSUE DESCRIPTION - write a SHORT title about what problem you're having. -->
When you type /p home it says you do not own any plots. This is fixed if you go into the plot world your plot is in. But this issue doesn't allow people to tp to their plots from spawn.
<!-- REPRODUCE ISSUE STEPS - how can this issue be reproduced? -->
## Reproducing the issue
1. Type /p home outside of a plot world
<!-- CLIENT INFORMATION - what is the plugin version, PHP version, and server build you're running? -->
## Client information
PocketMine-MP Version:
Plugin Version:
PHP version: 7.0.13 (default)
<!-- OPTIONAL INFORMATION - use this section for posting crash dumps, backtraces or other files(please use code markdown!) -->
## Optional information
| 1.0 | /p home Only works in the world your plot is in - <!-- put an 'x' in the brackets -->
- [x] This issue isn't duplicated - you can check if it is by using the search bar located at the top left hand corner and select "Issues" on the left.
- [x] This issue includes appropriate markdown for sections - e.g. code blocks for crash dumps.
- [x] This issue is understandable - feel free to use your native language to write issues if you are not comfortable with English.
<!-- ISSUE DESCRIPTION - write a SHORT title about what problem you're having. -->
When you type /p home it says you do not own any plots. This is fixed if you go into the plot world your plot is in. But this issue doesn't allow people to tp to their plots from spawn.
<!-- REPRODUCE ISSUE STEPS - how can this issue be reproduced? -->
## Reproducing the issue
1. Type /p home outside of a plot world
<!-- CLIENT INFORMATION - what is the plugin version, PHP version, and server build you're running? -->
## Client information
PocketMine-MP Version:
Plugin Version:
PHP version: 7.0.13 (default)
<!-- OPTIONAL INFORMATION - use this section for posting crash dumps, backtraces or other files(please use code markdown!) -->
## Optional information
| priority | p home only works in the world your plot is in this issue isn t duplicated you can check if it is by using the search bar located at the top left hand corner and select issues on the left this issue includes appropriate markdown for sections e g code blocks for crash dumps this issue is understandable feel free to use your native language to write issues if you are not comfortable with english when you type p home it says you do not own any plots this is fixed if you go into the plot world your plots is in but this issue doesn t allow people to tp to their plots from spawn reproducing the issue type p home outside of a plot world client information pocketmine mp version plugin version php version default optional information | 1 |
249,288 | 7,959,663,282 | IssuesEvent | 2018-07-13 02:19:34 | mit-cml/appinventor-sources | https://api.github.com/repos/mit-cml/appinventor-sources | closed | Map zoom controls do not work in Fixed sizing mode | affects: ucr bug issue: accepted priority: medium status: in progress | [From the forum](https://groups.google.com/d/msg/mitappinventortest/gfoFJBJhYVE/RMk4dnvPAQAJ): The zoom controls for the Map component do not work when in Fixed sizing mode and on a screen where the display isn't medium density. The controls are drawn in the wrong position and do not appear to respond to touch events. My hypothesis is that the touch zones and the drawn controls are not aligned, making it look as though the controls are not responding to events. Switching to Responsive mode resolved the issue and the controls are in the "right" position, bottom center of the map. | 1.0 | Map zoom controls do not work in Fixed sizing mode - [From the forum](https://groups.google.com/d/msg/mitappinventortest/gfoFJBJhYVE/RMk4dnvPAQAJ): The zoom controls for the Map component do not work when in Fixed sizing mode and on a screen where the display isn't medium density. The controls are drawn in the wrong position and do not appear to respond to touch events. My hypothesis is that the touch zones and the drawn controls are not aligned, making it look as though the controls are not responding to events. Switching to Responsive mode resolved the issue and the controls are in the "right" position, bottom center of the map. 
| priority | map zoom controls do not work in fixed sizing mode the zoom controls for the map component do not work when in fixed sizing mode and on a screen where the display isn t medium density the controls are drawn in the wrong position and do not appear to respond to touch events my hypothesis is that the touch zones and the drawn controls are not aligned making it look as though the controls are not responding to events switching to responsive mode resolved the issue and the controls are in the right position bottom center of the map | 1 |
413,029 | 12,059,424,398 | IssuesEvent | 2020-04-15 19:15:00 | cds-snc/report-a-cybercrime | https://api.github.com/repos/cds-snc/report-a-cybercrime | closed | Updated business form | medium priority | ## Summary
We have added different fields to the business page for victims who are reporting a cybercrime related to their business.
## Design detail
We have added different fields to the business page for victims who are reporting a cybercrime related to their business.


https://www.figma.com/file/R054FzUOrpP13oTbMsQevj/Sprint-9?node-id=3521%3A6371
## Unresolved questions
This is related to the change of the "What Happened" page, but this has been created as a separate issue following this. | 1.0 | Updated business form - ## Summary
We have added different fields to the business page for victims who are reporting a cybercrime related to their business.
## Design detail
We have added different fields to the business page for victims who are reporting a cybercrime related to their business.


https://www.figma.com/file/R054FzUOrpP13oTbMsQevj/Sprint-9?node-id=3521%3A6371
## Unresolved questions
This is related to the change of the "What Happened" page, but this has been created as a separate issue following this. | priority | updated business form summary we have added different fields to the business page for victims who are reporting a cybercrime related to their business design detail we have added different fields to the business page for victims who are reporting a cybercrime related to their business unresolved questions this is related to the change of the what happened page but this has been created as a separate issue following this | 1 |
3,248 | 2,537,525,685 | IssuesEvent | 2015-01-26 21:11:37 | web2py/web2py | https://api.github.com/repos/web2py/web2py | opened | auth.settings.extra_fields not available | 1 star bug imported Priority-Medium | _From [mcbo..._at_gmail.com](https://code.google.com/u/100411895927358784444/) on October 18, 2014 01:08:06_
What steps will reproduce the problem? 1. auth.settings.extra_fields['auth_user'] = [Field('age', compute=lambda r: 45)]
2. pass auth_user=auth.user to the view
3. in view {{=auth_user.age}} What is the expected output? What do you see instead? Should see "45" in form What version of the product are you using? On what operating system? Ubuntu 14.04
2.9.11-stable+timestamp.2014.09.15.23.35.11 (Running on Rocket 1.2.6, Python 2.7.6) Please provide any additional information below. ## create all tables needed by auth if not custom tables
Gender = ['Male', 'Female', 'Other']
def age(born):
tdelta = 999
try:
tdelta = int((datetime.now() - born).days / 365.25)
except Exception, e:
pass
#print('Age: ', tdelta)
return tdelta
day, month, year = [int(x) for x in "10/8/1969".split("/")]
born = datetime(year, month, day)
print('00_db -> Age: ', age(born))
t = (datetime.now() - born)
print(type(t), t.total_seconds() / (24*3600*365.25))
print(type(t), t.days / 365.25)
#print('00_db -> Age: ', (datetime.now() - born)).total_seconds()
auth.settings.extra_fields['auth_user'] = [
Field('birth', 'date'),
#Field('age', compute=lambda r: age(r['birth'])),
#Field('age', compute=lambda r:int((datetime.now() - r['birth']).days / 365.25)),
Field('age', compute=lambda r: 45),
Field('test1', compute=lambda r: int(2*2)),
Field('test2', compute=lambda row: row.email),
Field('gender', requires=IS_IN_SET(Gender)),
Field('address', 'text'),
Field('zip'),
Field('city'),
Field('country'),
Field('phone'),
Field('picture', 'upload', default='')
]
auth.define_tables(username=False, signature=True)
\# --------------------------------------------------
In controller: return dict(form=form, membership_panel=membership_panel, user=user, auth_user=auth.user)
Both user and auth_user shows the computed fields as None in the view
_Original issue: http://code.google.com/p/web2py/issues/detail?id=2000_ | 1.0 | priority | 1
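For reference, the reporter's age computation can be reproduced as a standalone sketch using only the standard library (no web2py dependency; the 365.25-day year is the approximation carried over from the report):

```python
from datetime import datetime

def age(born):
    """Whole years elapsed since `born`, approximated with 365.25-day years."""
    return int((datetime.now() - born).days / 365.25)

born = datetime(1969, 8, 10)  # the "10/8/1969" date from the report
print(age(born))              # 45 at the time the issue was filed; larger today
```

This isolates the computation itself; the bug report is about the computed value not surfacing through `auth.user`, not about the arithmetic.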
734,396 | 25,347,559,786 | IssuesEvent | 2022-11-19 11:33:19 | pystardust/ani-cli | https://api.github.com/repos/pystardust/ani-cli | closed | Manpage missing on Arch Linux-based systems | type: bug priority 2: medium | Version: 3.4.2
OS: EndeavourOS
Shell: fish, bash
No manpage despite there being one present in the source code.
**Steps To Reproduce**
1. Run `man ani-cli`
2. <details><summary>Get error message stating there's no manual entry for ani-cli.</summary>
<p>

</p>
</details>
**Expected behavior**
The manpage written for Debian systems to appear.
**Additional context**
I've previously used ani-cli on Kubuntu where this wasn't an issue. I'm aware that the manpage was intended for use on Debian systems, but [it states that it can be used for any system](https://github.com/pystardust/ani-cli/blob/955a603602f28dde6995ecd530f897d6625e1bb8/ani-cli.1.gz#L91). | 1.0 | priority | 1
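A quick way to verify the symptom the reporter describes is to look for the page under the conventional man search locations. The sketch below is illustrative only: the paths are common defaults and may differ per distribution, and "ani-cli" is simply the command name from the report.

```python
import glob
import os

def find_manpage(cmd, manpaths=("/usr/share/man", "/usr/local/share/man")):
    """Return any installed section-1 manpage files for `cmd` (compressed or not)."""
    hits = []
    for base in manpaths:
        # man(1) typically looks for e.g. /usr/share/man/man1/ani-cli.1.gz
        hits.extend(glob.glob(os.path.join(base, "man1", cmd + ".1*")))
    return hits

print(find_manpage("ani-cli"))  # an empty list means man(1) has nothing to show
```

An empty result here would match the "No manual entry" error: the page exists in the source tree but was never installed into the man path by the packaging.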
100,216 | 4,081,307,380 | IssuesEvent | 2016-05-31 08:23:45 | w3c/browser-payment-api | https://api.github.com/repos/w3c/browser-payment-api | closed | Storing card information | Cat: Core Functionality Doc:BasicCardPaymentMethod Priority: Medium question | The flow in the current "Basic Card" spec has an annotation to the effect of "Merchant can store card details for future use (aka 'card on file')." I think this is actually behavior we want to discourage very strongly, rather than encourage.
The current web environment -- absent webpayments -- does lead to a situation where it is very much in merchants' interests to store credit card information, as the only other alternative is requiring customers to re-enter the information for each purchase. With the API that we're designing, this rationale goes away completely: since the user agent will store credit card information, the merchant site only needs to call the API to retrieve the card information whenever it is needed.
This provides a number of benefits.
First, it removes persistently stored credit card information from the middle of the network, where it is demonstrably vulnerable to capture by hostile parties. There have been a large number of high-profile cases recently that arise only because of the tendency to store card information. We can help the web move away from that.
Second, it provides users the convenience of only needing to update changed credit card information once -- in their user agent -- rather than once per merchant. Since the merchant can interact with the UA to retrieve completely up-to-date information, we can eliminate the friction of web sites having to request updated expiration dates, and eliminate the hassle of updating myriad web sites when assigned a new credit card number (e.g., due to a lost card).
Finally, this approach provides the user additional information, agency, and control over their information, as they can be presented with indicia and/or controls any time payment details are accessed.
I would propose (a) removing the suggestion of storing credit card information from the flow, and (b) adding text strongly discouraging sites from storing credit card numbers, in favor of querying the user agent each time. | 1.0 | priority | 1
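The "query the user agent at checkout time" flow argued for above can be sketched abstractly. All names below are illustrative stand-ins, not the actual Payment Request API:

```python
class UserAgent:
    """Stands in for the browser, which holds the card details on the user's behalf."""
    def __init__(self, card):
        self._card = card

    def request_payment_details(self):
        # In the real API the user would confirm here before details are released.
        return dict(self._card)

class Merchant:
    """Stores nothing persistently; asks the user agent at checkout time."""
    def checkout(self, user_agent):
        details = user_agent.request_payment_details()
        return "charging card ending in " + details["number"][-4:]

ua = UserAgent({"number": "4111111111111111", "expiry": "12/29"})
print(Merchant().checkout(ua))  # prints: charging card ending in 1111
```

The point of the sketch is structural: the merchant holds the details only for the duration of one transaction, so there is no stored "card on file" to breach or keep up to date.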
51,176 | 3,011,337,387 | IssuesEvent | 2015-07-28 17:22:26 | CenterForOpenScience/osf.io | https://api.github.com/repos/CenterForOpenScience/osf.io | closed | some search results offer "jump to" but not all | 5 - pending review bug: production intern priority - medium | ## Steps
1. Search the OSF for something that will yield many results (try "reproducibility project: psychology")
2. Look at the various results
## Outcome
1. Some projects show a "jump to: wiki | files"
2. Some projects do not show that option, even if they do have a wiki or files uploaded

## Expected outcome
1. All projects should show the "jump to" option if there is wiki content or an uploaded file
| 1.0 | priority | 1